Oil Spill Response Limited
  • Podcast
  • Industry

The OSRL Podcast: The Response Force Multiplier - Episode 3

In this episode, we join Liam Harrington-Missin as we delve into the fascinating world of artificial intelligence (AI) and its impact on emergency management.

  • By Liam Harrington-Missin
  • Oct 4, 2023

Navigating the AI Revolution in Emergency Management: Insights from Liam Harrington-Missin

In this thought-provoking episode, we delve into the fascinating world of artificial intelligence (AI) and its impact on emergency management. Join us as Liam Harrington-Missin, Head of Data Technology and Innovation, sheds light on the profound changes brought about by AI technology. From the evolution of AI's role in handling tasks such as oil spill detection through satellite imagery, to the game-changing introduction of large language models like ChatGPT into consumer domains, Liam offers insights into the rapidly changing landscape of AI adoption.

We explore the implications of AI on emergency response exercises, misinformation management, and even the potential transformation of traditional responder roles. Join us on this journey to understand how AI is reshaping the future of emergency management and what it means for organisations and responders alike.

Podcast Transcript

Emma Smillie  00:02

Hello, and welcome to The Response Force Multiplier, a podcast that explores emergency planning and response. On The Response Force Multiplier, we bring together compelling experts and thought leaders to provide a fresh take on key issues and cutting-edge techniques in this field. In each episode, we'll dive into one aspect, and we'll use OSRL's unique pool of experts and collaborators to distil it down into actual tools and techniques for better preparedness and response to incidents and emergencies. My name is Emma Smillie, we are Oil Spill Response, and this is The Response Force Multiplier. In today's episode, we discuss the most seminal disruptive technology to come along in years: artificial intelligence. AI is of course at the forefront of modern global conversation, as people wonder if this technology is going to have the impact some predict, in ways both positive and negative. And more specifically for emergency response: how will AI affect our response planning? And how should we approach and view AI as this disruptive technology develops? So today, we explore what kind of disruption this will bring. Will it completely change the industry and cause organisations to rethink their entire structure? Will it take everyone's jobs? Or will it simply be an extremely powerful tool that optimises work and brings efficiencies that we never could have imagined? To discuss this, we speak with Liam Harrington-Missin, Head of Data Technology and Innovation at Oil Spill Response. Liam discusses how he views AI in the broader context of emergency planning, where he sees the dangers and the benefits of using AI in response planning, and how organisations can position themselves to make the best use of this emerging technology. Right, so Liam, thanks for joining us. Great to have a conversation with you about AI. Can I just start by asking you to briefly describe your background and your role at Oil Spill Response?

 

Liam Harrington-Missin  01:58

Yeah, of course. So I studied Oceanography at Southampton University about 20 years ago now. My interest really was the physical application of oceanography: not so much the biology, but more, how do we measure it? Why do we measure it? What problems do we solve? Very much how technology can help. Across the industry, this technology was starting to really spin up, and tools started to become a lot easier to use. So it wasn't that we couldn't do it before, but it was just not practical for us to learn, and spend all that time becoming experts in the tools. But the ability to adopt tools is becoming, and continues to become, far, far easier, to do a lot more complex things very, very quickly. And so I shifted about three years ago now to supporting the executive team and the business as a whole, and the oil and gas industry, or the energy industry as it is now, looking at the application of technology to address oil spill problems and big-shift stuff. So my current role is the Head of Data Technology and Innovation, which is an incredible catch-all statement; I get messages from every sector possible. Because, I mean, technology is everything. Data is everything and everywhere. So anything from cybersecurity to new types of sorbent material for collection booms kind of drops onto my desk. So whilst my job title is all-capturing, very much my focus is on the application of digital tools to oil spill response, the software side of things.

 

Emma Smillie  03:25

So moving on to AI, which is kind of where our conversation came from. Can you talk a little more about AI in general? Kind of, in simple terms, talk about what it is and why everyone is talking about it at the moment.

 

Liam Harrington-Missin  03:41

If you take the simplest tools, like booms and skimmers and collecting oil like that, the technology itself sees small efficiencies here and there, there are new slight variations that capture some things, but that technology space tends to be relatively static. It's plateaued. Deepwater Horizon, I suppose, was a big trigger for a big technology shift in oil spill response: the introduction of the well caps, the advancement in aircraft capabilities, the 727 aircraft. These were really big, major technology projects that have come across our desk and have completely transformed the organisation. Then COVID happened, really, and that was a big catalyst for the second kind of big technology shift that I've seen at OSRL. We're talking over the internet now, where previously Microsoft Teams was kind of a thing that existed and Skype was around, but people only kind of used it. Then COVID happened and it transformed everybody's working life. And the big shift there has been suddenly this big drive and adoption and evolution of technology to help us work smarter with data and to communicate over big distances and, yeah, change the nature of work. And that in turn has led to some really big technology shifts and mindset shifts in the application of data for oil spill response, for emergency management, and for just work in general.

 

Emma Smillie  05:06

How have you seen that evolve and change in your time at Oil Spill Response? And what are the trends that you're witnessing?

 

Liam Harrington-Missin  05:13

AI isn't anything new. I mean, it's been around for a very long time. It's really just about computers replicating human tasks as effectively or more effectively than before. So whereas typing a formula into Excel isn't artificial intelligence, predictive text on your phone, for example, is starting to hit on the artificial intelligence side of things; it's smart. And I think we're seeing these pieces of artificial intelligence pop up all over the place, and they have been around for a very long time. On the front-end emergency management side of things, before the beginning of this year people probably might have thought about the way oil spill detection works with satellites. So a satellite goes over and captures a picture of the ocean, and artificial intelligence is able to create an outline that says this area here is likely to be an oil spill, and outside of this area isn't the oil spill. That's kind of machine learning on imagery. At the beginning of the year, the reason why it really went viral, and it did go viral, was the introduction of this large language model, ChatGPT, which really took it to the consumer rather than to these specialised applications, like imagery analysis and noise cancellation and things like that. And I've been to conferences this year, and everywhere you go you see AI being thrown up on banners, ChatGPT being thrown up on banners, the application of AI to your work streams being thrown around. Microsoft have bought into the makers of ChatGPT, and we're seeing those AI features come out into the Microsoft products. And really what shifted it was that it's now so easy to engage with artificial intelligence. I mean, ChatGPT, if people haven't been on it, is just like sending a text message to someone. And that someone just happens to be the smartest or most intelligent person on the particular topic that you give them.
They can be really creative, or they can do this, or they can do that; you can get them to write poems for you on a particular topic. It's really fascinating, when you try and break it and stress test it, how well it performs all the way through. Up until that point, everyone was like, Google is your source of truth, right? Google is a verb that you do to find information, whereas this takes it a step forward. Rather than search for something on Google, and then infer that knowledge into the answer to your particular problem, you use ChatGPT to skip all of that, and it will just tell you the answer to your particular problem in a way that you can really comprehend. So it's a very exciting tool. It is, of course, prone to challenges, and as I've watched ChatGPT evolve, I've seen those things start to come through as they react to the social concerns that AI brings. But it certainly doesn't look like it's going anywhere, and I think it's going to accelerate quite quickly and impact every business in all sorts of creative ways.
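The satellite detection step Liam describes can be sketched in a few lines. The per-pixel scores and the threshold below are hypothetical stand-ins for a trained model's output; as he notes, real systems only flag areas as likely or unlikely to be oil, never definitively.

```python
# Toy sketch of satellite-based oil spill detection: a model scores each
# pixel of an ocean image, and pixels above a confidence threshold are
# outlined as "likely oil". The scores and threshold here are illustrative
# stand-ins for a real trained classifier.

def classify_pixels(scores, threshold=0.7):
    """Label each pixel 'likely oil' or 'unlikely' from model confidence scores."""
    return [
        ["likely oil" if s >= threshold else "unlikely" for s in row]
        for row in scores
    ]

# A tiny 2x3 "image" of hypothetical model scores.
scores = [
    [0.1, 0.2, 0.8],
    [0.3, 0.9, 0.95],
]
labels = classify_pixels(scores)
```

The labelled grid is what would then be drawn as an outline on the satellite image for a human analyst to review.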

 

Emma Smillie  08:04

Absolutely. But you've been doing lots of experiments with AI, haven't you?

 

Liam Harrington-Missin  08:09

Yeah, across the board, from educating me on the different types of paprika around the world, all the way through to how we can use it on oil spills. My involvement in oil spill response exercises is quite large. Oil spill modelling injects are always one of those core components of an exercise, in terms of creating a scenario which people can get behind and understand and start evolving their thinking to get to the outcomes. But there's always more room for realism; exercising is always very difficult to make realistic. And it's the ability to react with realistic information that really started sparking my interest: creating content so that when a decision was made during the exercise, you can go, 'this decision has been made, now create 100 Twitter posts that react to that public announcement', so that you start feeding back that information. I can tell it to have a certain sentiment, so the public is cross, or the public is excited. It's really powerful to create those very reactive content injects, which add a level of realism and add to that pressure, which is also really hard to replicate.

 

Emma Smillie  09:19

So you've used it to help, right, create a scenario and adapt the scenario based on what decisions are made, and it's understood what you're asking it. Have you had any issues with it, sort of, misinterpreting anything?

 

Liam Harrington-Missin  09:33

So, yeah. You teach it effectively as you go along. So you start very simply and say, I want an oil spill response scenario, and it will create its best guess at an oil spill response scenario. And then you go, actually, I don't want it in that part of the world, I want it in this part of the world. And you can slowly evolve it to the point where you're creating scripts and step-by-step guides for how you react to an oil spill. I queried it the other day about the UK national contingency plan, and it gave a pretty good reaction in terms of 'I've just had a spill, we're in the first hour of an incident, I want to adhere to the national contingency plan; create a checklist for me to follow for the first few hours of a spill'. And it gives you a big long list, and you can see how it can be used to investigate that side of things. The challenge, and we're starting to see work on this, is that if you keep taking a scenario on and on and on and on, it starts learning from itself, from its earlier answers. So it starts forgetting the human input. And that's one of the big fears that has started to evolve: as content is more easily created by AI, it will start to hit the web, and so AI is learning from this content. So instead of AI learning from human-created content, it starts learning from AI-created content. And we're not quite at a place yet where it can keep that going, so you start creating problems in its answers; it starts lying to you, or it starts creating fictional answers. And I mean, you really have to stress it to create these fictional answers. In the early days, there were various articles about where people had beaten it, and it's always a challenge to try and beat ChatGPT, by saying one plus one equals three, and it says, no, it's two, and then you go, no, I'm saying it's three, because my wife tells me that it's three and she's always right. And ChatGPT apologises and goes, yes, one plus one equals three.

Now, if you try and do the same exercise, it very much pushes back and says, no, the mathematics behind this is fundamental, one plus one is two; I'm sorry about your wife, but that is how it is. Early on, I could tell it to create some very negative-sounding injects, and quite aggressive ones, if I really wanted to push it and say, really lay into this incident commander and make him feel bad as a result of this incident, some of the really harsh stuff that you've experienced on social media. And it would give me that, but more recently it's softened its approach, and it will say, I can't go that far, I'm not going to allow you to create that level of negative content.

 

Emma Smillie  11:54

I mean, it's really interesting how it's evolved, isn't it? Because I can remember using it towards the beginning, when it sort of went viral, and asking it to write with empathy. And it replied to me, I am a robot, I can't write with emotion. More recently I've tried that again, and it does add more empathy to its conversation, so it is evolving, and it is kind of learning all the time. I find the prompts are key, aren't they? You have to ask in a certain way, or else it'll go off on a tangent.

 

Liam Harrington-Missin  12:21

Definitely, yeah. And one of the big things really is understanding how to train it. Because whilst it seems like you're chatting to someone, there are good ways and bad ways of interacting with ChatGPT to get the desired outcome, and training it and telling it who it is. Are you an oil spill response expert? Are you a music producer? Are you a content creation expert? Telling it to act in this way, or provide the voice in that way, is really important upfront. You don't have to train it so much on kind of common-knowledge stuff. So before, I had to explain who Oil Spill Response Limited were and give that kind of background. Now it's been updated so that it understands who OSRL is, or at least I can just say I am the data tech lead of OSRL, and it's got all the background information to be able to interact with. But yeah, the skills and the good practice guides and all the various white papers that are flying around the internet, in terms of how to use ChatGPT to give you the outcomes you want, are worth learning so that you don't go down that tangent.

 

Emma Smillie  13:21

I guess creativity plays quite a big role in how you leverage any of the AI technologies effectively.

 

Liam Harrington-Missin  13:27

Yeah. As people look at the direction of travel of AI, what we're seeing is that computers and machines are able to do human jobs far better than humans can: they can work faster, they're more accurate. And now we're very much on the trajectory of being more standoffish and saying, actually, this is the outcome I want, you figure out how to do it, and it will come up with a pretty viable solution. I do believe that AI can deliver an awful lot of value to the world. But at the same time, we don't want people just sat in chairs or wandering around doing nothing; you want society to go forward. So understanding what education needs to move towards is really interesting. And as someone that employs people, what skills do I need to employ people with to make us resilient as an organisation?

 

Emma Smillie  14:15

It's definitely a conversation that has come up in the marketing and comms world, for sure. Everywhere I go I see it. To be honest, I was in the playground picking up my daughter from school, and somebody was talking about it: 'Why would I use ChatGPT?' (she was obviously a marketing person) 'I'm not going to use it, because it puts me out of a job.' But the way I see it is, actually, it's going to take all those time-consuming things and do at least a draft that you can then adapt and publish. The strategy side and the thinking side, it's not going to do that for you. And from a crisis communications element, it can write a great statement, or at the moment it might do, given the changes I've seen, but it doesn't add the human element, the concern, the feeling and the sense check that you need for effective crisis comms. So it is interesting.

 

Liam Harrington-Missin  15:08

The softer skills, like teamwork, creativity, dynamic thinking, those kinds of skills are very much about using all the tools available. Those are the kinds of skills that we start looking for, the passion behind work. Now, an observation that I share with teams all the time is that if you're doing the same thing more than once, then a computer can do it, because all you're doing is a repetitive task, and very simple tools now will just replicate that simple task. And we do it all the time, right? We fill out forms on an incident and things like that, and all of that stuff is humans slowing down a response; they're not putting their brainpower to solving the larger problem. Problem solving is a huge skill that isn't going to go away, but we tend to focus more on teaching the tools to solve problems, rather than the skills to understand and creatively solve problems. I think it will impact all walks of life very dramatically, and in response it could very much replace a large majority of the tasks that humans do; it has the capability to do that. The big question is whether humans are going to trust it enough to allow it to do that. You could have a far more effective response if you just handed over the reins to an AI and said, you solve this spill, and within seconds you'd have all the paperwork completed, and it would be fired off and vessels would be heading out to the right location, and it would be optimised based on trillions of scenarios that it's analysed. As soon as you introduce a human decision into that, you slow it down considerably. But you've introduced a human decision into it, which everyone feels better about. And that's where it is interesting to see how we will evolve as a society: how much we trust AI to deliver. It is no longer about whether it's feasible; it's whether we're going to let it.

 

Emma Smillie  16:55

Are you seeing artificial intelligence being integrated into emergency preparedness and response at the moment, or is that a future trend?

 

Liam Harrington-Missin  17:03

Other than the examples I gave earlier about kind of machine learning on satellite imagery and things like that, I haven't seen the large language model side of things arrive in emergency management, other than, again, kind of very supervised examples. I wouldn't be surprised if ChatGPT or equivalent was being used to create content during exercises to add realism here and there. But it's not running the exercise; it's just a tool that a content creator like yourself, or someone like me, is creating data inputs with to add realism, with us very much on the sidelines. I'm interested to see what an oil spill response scenario or an exercise could look like if driven and owned by an AI and supported by humans, if you swap the dynamics around. I don't know enough to know whether it's capable of doing that right now, but I'd be really interested in how it would work. And with the backing that it's got and the trajectory that it's on, I can't see it not playing a bigger and bigger part. And I can see it being a differentiator for an organisation that gets in early and goes, actually, what takes one person an entire day to do is now just automatically handled in the background. All the approvals are taken care of in seconds; dispersant pre-approval happens very automatically, with an AI in a government department talking to an AI at the requesting party, and those permissions happen straight away. Again, we're back round to the question about how far we trust AI. It will make you more efficient, but do you trust it to be effective?

 

Emma Smillie  18:37

Yeah, trust is a big thing there. And this is all changing so rapidly. I mean, how do you keep up to date with everything that's going on? And what advice can you give to others to help them keep up to date?

 

Liam Harrington-Missin  18:48

You're always on the back foot with technology these days; you have to accept that you're never up to date. There are millions and billions of people doing incredible things day to day. You keep an eye on some of the official directions, like I keep an eye on what ChatGPT is doing, what the release notes say, and what the roadmap is for the kind of big forefront hitters. Because it's now emergent, right? Around November, December time, LinkedIn started going crazy about ChatGPT, but before that I had no idea about large language models. And then it hits and gets rolled out very quickly. We discussed this kind of thing at the beginning of the year; ChatGPT was massive back then, but in my space the hype's disappeared now, and I see articles here and there. That's because Apple's released its new immersive technology headset, and that's now the big new thing coming through. In keeping abreast of technology, you just have to accept that you're not going to. The thing that's easier to keep on top of is the problems that your organisation, or emergency management, is facing; that's not evolving as quickly as technology. So when you're finding the problem, and you've got a pretty basic understanding of all the different technology spaces, then you can start creating innovation just by marrying up the problem with the new technology. What wasn't viable six months ago may be viable now. So yeah, accept that you're never going to be on top of it all. And the ideas could come in from anywhere in the organisation or in the industry, so keep an open mind, really. It's really easy to get excited about an idea, and then the next idea comes along, and the next idea after that. I mean, great ideas are really easy to find these days, because technology really enables us; information is everywhere. It's turning an idea into something tangible which is where the real challenge is: it's executing ideas, it's delivering on ideas.

And then you've got no choice but to really best-guess which technologies are the most valuable, and when to jump on the bandwagon.

 

Emma Smillie  20:41

Yeah, this leads me very nicely to my next question. One of the things that goes around my LinkedIn feed, because a lot of it's around crisis comms, is fake news, and the ability to just generate a whole heap of information that isn't even correct. So I mean, I guess there's the chance that the use of AI could actually confuse the situation. How do you tell the real from the fake?

 

Liam Harrington-Missin  21:08

It's not a problem where I can say, well, this is the solution, you just fire AI at it, and AI will tell you what's true and what's not true, because it won't. This is one of the big shifts that someone somewhere is going to have to figure out. Is it Twitter that started handling fake news, by having verified accounts and having external people verify information? You can't stop misinformation being published, and you potentially can't even stop it gaining traction and becoming viral; you do have to educate people on how best to verify the information they see in front of them. I see it all the time on different social media platforms: it's so simple for me to act like an authority on anything now, and I can get AI to create various articles that make it seem like it's true. Verifying information is one of those steps that we're going to have to learn how to do a lot better. And it's really interesting, because misinformation could slow down a response, or create havoc during the response if done really well, or lead people to have to do work they shouldn't have to do. I can give an example from way back before the large language stuff. There was an incident, and there was oil on the land near the marine environment. But in the marine environment there was a biological residue on the surface, nothing to do with the oil spill, a biological residue like seaweed, and satellites picked it up. Satellites can't detect oil; satellites can detect a signature that could be oil, and they can say likely or unlikely to be oil, but they won't say this is definitively oil. So if you see that residue on a satellite image, you can very quickly put out a message saying, look, oil's in the water. And suddenly, even though we know it's not, we're having to react to that public engagement, that observation of oil in the water. And it's very hard to backtrack once that image is out there.

Because maps are very seductive in terms of content; people love a map. A picture showing clearly what looks like an oil spill on the water is really hard to disprove or explain, especially to a public who are guided not necessarily by the information coming out of the comms office for the incident, but by the downstream media outlets that are potentially adopting the stories that sell, rather than the stories that are necessarily factual. That's just one of the things that you need to exercise and understand, which, again, I don't know whether that's done. I certainly haven't seen that level of focus in an exercise in terms of handling misinformation during an incident, because it could completely derail it.

 

Emma Smillie  23:38

Absolutely. I guess that means we're starting to talk about the risks, and there are risks in relying heavily on AI in emergency preparedness and response. So we've talked about misinformation, but how about too much data?

 

Liam Harrington-Missin  23:54

So take another piece of technology: the internet. We rely on the internet. If we didn't have the internet during a response, how would we respond? And that really breaks a lot of people's heads, because it's so fundamental to everything now that it is critical that we have the internet, and we have various resiliences to try and make sure that we have it. I can see AI being the same situation, in that in five to ten years' time we won't be able to respond to an incident without AI, or without all sorts of emergent technologies. Now, it will naturally become more resilient, it will become more trustworthy, it will become more regulated, it will be better understood, in the same way as the internet is. I mean, you can break things with the internet; you can find misinformation if you actively search for it or you go to the wrong places. Managing that risk is partly that you just have to watch what's happening in the regulation space. Don't use a dodgy AI tool because it's cheap; use a regulated AI tool that conforms to the various regulations.

 

Emma Smillie  24:56

Well, the other question I was going to ask, and I don't know whether you can answer it, is: how secure are they? Because that's something that came up in one of our crisis exercises, where I was using ChatGPT to help me quickly. It was one of our own exercises, exercising our own crisis management team, and I was using ChatGPT to quickly give an outline that I would then add to. But how secure are they?

 

Liam Harrington-Missin  25:16

It is really easy to share very sensitive information, and it is very appealing to share very sensitive information with ChatGPT. Creating a new strategy for your organisation, for example: it's a good partner to bounce your ideas off, and it can be a really good advisor on the various strategies. But you're sharing the highest level of intel about your organisation with a platform, and the concerns about security are valid. You shouldn't share sensitive information with an open platform unless you are really confident about security. Putting sensitive information onto the internet is always a security risk. You can ask ChatGPT about how secure it is, and it will claim that it doesn't share information outside of the chat and that it's secure to you, so it just adds the information to that particular session and then gets rid of it. But I've got a chat history now, so it is being saved on someone else's data platform, encrypted or otherwise. So it should always be in people's minds what they share on any platform, regardless of whether it's ChatGPT or anything like that. Because the internet and the applications in your organisation will evolve very quickly, and you don't necessarily know which ones you can share information with; that's another big challenge that most organisations face.

 

Emma Smillie  26:31

So going back to the data, you were going to explore that topic of too much data.

 

Liam Harrington-Missin  26:35

The amount of data that's being generated in a spill, or an incident, any kind of marine pollution is dramatically increasing over the last few years really, we've seen the impact within OSRL over the last 18 months or so on a different incidents we've responded to.  The reason why the data is expanding so quickly is how easy it is to get imagery of incidents, I mean, you can take a picture with your phone and it can arrive. But drone technology, Remote Piloted vehicles and things like that these things are really cheap to get now and they're at a level where they really add value to an incident having that pie in the sky. Picture is now cheap and easy to deploy. These things are huge data files compared to your standard text file, we've got a huge challenge with imagery within OSRL not just within incidents, but across the organization imagery files, video files, they take up vast amounts of space. And they're also really hard to search and explore a video file, you just get file name, and then you have to watch the video to understand it. Whereas many files you can search within it. They're structured in a way that you can easily explore. Whereas images in an incident we can feel like 50,000 phone camera images, and unless they've been properly tagged and catalogued, then they're just 50,000 files that you have to manually go through to export unless potentially you deploy AI to scan them and extract the key bits of information, which is another great use of AI is image categorization. So where I'm coming round too is that AI can be a real tool to take all this vast amounts of information that comes in on a day to day basis and turn it into stuff to tell people where to focus their attention. And that's one of the big revelations about the whole visualization space in an incident. 
I've seen Incident Command rooms, which is just blank walls, and you stick pieces of paper up on the wall as things come in, especially the very early phase of an incident or someone that doesn't have a full command center setup. You're just trying to get as much information up on the wall. So the humans in the room can assimilate it. It's all about that process, right? You go from data to information to insights to decision, that's the workflow that we're always working for, we're always looking for that great decision. More and more data will help inform you to make that great decision. But if you get overwhelmed by it, it's very easy to go I haven't got the brain capacity to simulate this data to make a great insight. Where I can see AI coming in is again, it's trying to shortcut that bottleneck, which is the human decision making side of things. We can no longer expect anyone to be able to look at every piece of information coming in the door for an incident stuff will get missed if you rely on people doing it. So you have to rely on technology to help synthesize that data to make it easier for humans to ingest it so that they can make the decisions because we're still relying on humans to make that decision. That's not going away anytime soon. Despite capabilities of AI. What you do with AI is use it as a tool, which is what it is to take that information and train it to say this is the stuff that we're really interested in, highlight this on the big key notes board or something like that and train it that way. And that I think is well within our grasp the next few years is being able to use AI to supplement the tools we have to synthesize data so that the right bits of information at any moment in time can be put up on the screen or can be queried from the database. And we're seeing that some of the exciting stuff that's coming through from Excel is this ability to ask Excel questions on a spreadsheet. 
And it will create your graphs and your statistics based on the question you've asked it, rather than you having to ask a business analyst to produce a series of graphs that may or may not answer your question, and then having to go back, and things like that. So AI's ability to synthesise large amounts of data, I think, is one of those near-term wins we're going to see, and one that people are going to feel more comfortable with, because it's not the decision that's artificial, it's the insight before the decision. And we feel more comfortable about that. I can see that being the next sensible step.
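The data-to-information-to-insight filtering Liam describes can be illustrated with a toy triage scorer. None of this is OSRL's actual tooling: the keyword weights and the example reports are invented for illustration. The idea is simply that incoming situation reports are scored against what the team currently cares about, so the highest-priority items surface first on the board.

```python
# Invented keyword weights: what this (hypothetical) response team
# currently cares most about.
KEYWORDS = {"shoreline": 3, "wildlife": 3, "sheen": 2, "weather": 1}


def score(report: str) -> int:
    """Sum the weights of every keyword that appears in the report."""
    text = report.lower()
    return sum(w for kw, w in KEYWORDS.items() if kw in text)


def triage(reports: list[str], top_n: int = 2) -> list[str]:
    """Return the top_n highest-scoring reports, most urgent first."""
    return sorted(reports, key=score, reverse=True)[:top_n]


reports = [
    "Sheen observed near shoreline, wildlife at risk",
    "Weather update: winds easing overnight",
    "Logistics: fuel delivery confirmed",
]
# triage(reports) puts the shoreline/wildlife report first.
```

A real system would replace the keyword table with a trained relevance model, but the workflow is the same: score everything automatically, and let humans spend their limited attention on the items that score highest.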

 

Emma Smillie  30:45

That does seem to be where the real value would be, certainly in our world of oil spill response. I guess my next question was around the more human element, the actual physical response, things like wildlife response. I'm not sure I can see a world where AI would take over that. What are your thoughts?

 

Liam Harrington-Missin  31:08

One of the really interesting things is that all these technologies are coming together simultaneously. There's artificial intelligence, which is one of the top emergent technologies on my radar. Autonomous vehicles is another; immersive technology, so headsets and things like that; and the geospatial tools I've mentioned before, things like common operating pictures. Those four are big topics, and it's unrealistic to expect there aren't feedback loops across all four, and many others. AI in autonomous vehicles is one of those areas where you can start to see how it would begin to replace the human side of things. Anything where we have to physically interact with the world, we can either do with people, boots on the ground and things like that, or, more and more, we're seeing robots take over: robots in warehouses, robots in surveys and high-risk areas. When you come to things like deploying booms and skimmers, we're already seeing startups going, 'Well, here's your autonomous surface vehicle with a boom attached to it. Here's your robot that goes down a beach and collects all the pieces of plastic automatically, because it uses camera feeds to identify what's plastic and picks it up.' Something like wildlife cleanup is difficult because, thinking of it from a problem point of view, you don't want to hurt the animal by cleaning it, right? And that's why we rely on people: people are gentler, they understand touch and things like that. Then again, if you look at the mass-production side of things in food processing, for example, there's an awful lot of automated machinery that can very carefully handle things like salmon, so you don't bruise the skin, you don't bruise the meat.
If you can do it with something that's dead, then it's not hard to say, well, when it's alive it may struggle, but it's not impossible that we could have a robot that automatically cleans various types of birds down a conveyor belt. I'm getting silly, but you can see how it's perfectly possible. We could have a completely autonomous response now, if the right existing technology was put in place, but it's not viable: it would be super expensive and fraught with lots of problems, and you'd get lots of mistakes. Whether it's going to be in that state in ten years' time is a different story; it potentially very much could be. I suppose a closing thought is that one of our big drivers is to get responders out of harm's way. We don't want responders in places where they could potentially get harmed, and one of the big high-risk areas is putting people on boats and getting them to deploy very heavy, large pieces of equipment in very dynamic waters. So the driver to get responders off the water and controlling an artificial intelligence that can clean up the oil as effectively or better is incredibly appealing to everyone. Why would you not want to do that? And we're seeing it with things like drones, autonomous vehicles and autonomous shipping coming through. So we think the chances are high that in the next ten years we're going to start seeing that shift: what we think of as naturally just the skills we need to teach our responders is not necessarily the set of skills we're going to be teaching them in ten years' time. It's going to be how to control autonomous vehicles to do such tasks.

 

Emma Smillie  34:34

Yeah, and then I guess the responder role becomes more of the incident management, the overall oversight, the control of the autonomous vehicles. It's interesting, isn't it? A whole different skill set, or developing skills that are already in existence but really taking them further.

 

Liam Harrington-Missin  34:51

It's taking us away from the physical acts and more towards the creative thinking. It's like: we have a finite number of autonomous vehicles that we can deploy to clean up the oil spill, so what's the best strategy over the next five days? That's where you can see the human interaction bouncing off the AI, or something unexpected happening and you needing to remotely pilot a vehicle, things like that. Those are the kinds of skills I can see growing over the next ten years, which, for existing responders, I can see being very threatening: a lot of the skills, the talent and the experience are absolutely essential today, but how long are those skills going to be valued when the outcome that we want, a cleaner environment, can be delivered with far more autonomous solutions than is current today? So the responder of the future is a really interesting topic.

 

Emma Smillie  35:43

So is this what tomorrow's world of emergency response looks like, Liam?

 

Liam Harrington-Missin  35:49

I see all this technology, and it's out there, and some of it's used more widely, but it's so expensive compared to the standard model that OSRL and response agencies use that I can't just go, 'Let's buy five Spot robot dogs to replace ten of the workforce,' because it doesn't work. But when do you start investing in this technology? Drones are the big one for me at the moment: I'm trying to push for more in-house capability with autonomous drones rather than relying on a third-party provider. There are pros and cons to both, and the argument is worth having over and over again. It's just knowing when to jump on, how to do it, and when to take the risk. And it's not that there is a right answer or a wrong answer, because we can't predict the technology space in three months, let alone in three years; it's just knowing, when you make that decision, that there is a risk associated with it. Sometimes you're going to get that risk right, and sometimes you're going to be in too early or too late. And the big one really is about learning the lessons that adopting new technology brings, not necessarily the new technology itself. How can we be more effective at adopting new technology? That's hugely important to an organisation now: being agile, being able to get on with it quickly rather than spending years debating a particular technology, just going try, fail, learn; try, fail, learn; try, fail, learn. Okay, let's park this for now and come back to it in a year's time; let's try this. And that is very hard for a typical organisation to get its head around when it's used to a far longer timeline with big business cases and risk analysis and things like that. Technology just doesn't give you that assurance anymore. You can't adopt technology with the mindset that it is proven. You can't; it will be disproved, it will go out of date really quickly. And there are big wins to be had, and there are big losses to be had. That's terrifying for decision-makers.

 

Emma Smillie  36:04

Yeah, we've had the phrase 'fail fast' in digital marketing for quite a number of years now, but that's less spend, and it's less risky in most instances; the technologies and all the things you've talked about are a whole new level. I think we've covered a lot there already. Are there any reflections you'd like to share, any further thoughts or advice you would give to people or organisations looking at AI and technology?

 

Liam Harrington-Missin  38:09

There's always going to be a new shiny tool around the corner, there's always going to be new technology, there are always going to be ways in which you can do what you're doing better, whether it's entirely automated, human-supervised, or, where we decide, entirely human. But you need to be conscious that you're making those decisions, that you're making those distinctions. The big challenge that everyone's got now is that what you don't know is huge compared to what you do know, and you have to start getting comfortable with being uncomfortable about how little you know about the situation and the solutions. You have to look less at a big technology-adoption project that takes many years and start breaking it down. I mentioned agile earlier on; this is a framework that's been around different industries for a while but hasn't really reached ours. We're still very much in a plan-then-deliver type of mentality, whereas iterative delivery actually reduces a lot of risk. But you have to make that first step, and you have to accept, going into a project or a technology adoption, that it may fail; in fact, the likelihood is that it will fail for at least the first five to ten iterations. So adopting AI, adopting immersive tech like the Apple Vision Pro headset, adopting different surveillance technologies: all of these things have to stop being big projects. They have to start being small, iterative things that we try, learn from, deliver and move on from. I think we have to learn how to learn things faster, try things faster and evolve faster. This transformation shouldn't be a project that has an end date. It should just be what you do on a day-to-day basis: you are constantly evolving.
And anything that's getting in the way of that constant evolution, the need to sign off on business development proposals, business cases, process authorisations and things like that, is potentially hampering the effectiveness of your organisation.

 

40:32

Thank you for listening to The Response Force Multiplier from OSRL. Please like and subscribe wherever you get your podcasts, and stay tuned for more episodes as we continue to explore key issues in emergency response and crisis management. Next time on The Response Force Multiplier...

 

40:48

No stage of development is bad. No operating system is bad, but it can be limiting. If we cast our minds back to the very first smartphone, look at what we could do on that device with the operating system it had, versus what we can do on our current devices with the operating systems they have now. It's radically different: we do more, we can process more, from a technology perspective. So I think about our own inner operating systems as human beings as similar to that.

 

41:21

For more information, head to oilspillresponse.com. See you soon.