Ep.66 AI anxiety: managing the unknown for better business decisions

Kim Sherwin, Director of Capability and Knowledge, Aurecon
Shannon Sands, Software Developer, Nous Research
Dave Mackenzie, Managing Principal, Digital – Australia
Leeanne Bond, Non-Executive Director
Andriy Mulyar, Founder and Chief Technology Officer, Nomic
21 August 2024
15 min

Maria Rampa: Hi, I’m Maria Rampa, and welcome to this episode of Engineering Reimagined.

Our last episode about AI resonated so well with our listeners that we decided to make a part two, delving into what’s been coined as ‘AI anxiety’.

While AI undoubtedly has many benefits, the technology’s potential to change the way we work forever can feel unnerving. How do you price services if it takes less time to deliver? And how do you decide where to invest in AI in this critical stage of R&D?

Today’s episode features some of the brightest minds in the industry who spoke on a recent panel about what keeps them up at night about AI; the ethics around usage and data privacy; and the risks and benefits that organisations need to be aware of.

Kim Sherwin and Dave Mackenzie from Aurecon join Andriy Mulyar from Nomic, Shannon Sands from Nous Research and non-executive director Leeanne Bond.

Together, they’ll provide valuable insights into how businesses can navigate the complexities of AI adoption, ensuring it not only drives innovation but also aligns with ethical standards and practical realities.

+++++

Kim Sherwin: To get us started, what actually keeps you awake at night about AI?

Andriy Mulyar: So the expected answer to this is something about world domination or something scary like that. I think that is a risk that is very, very far off in the future, if it's even honestly a risk, given the way current systems are plateauing. The biggest thing, in my opinion, is that access to the technology is going to stop at very fancy demos, and the capabilities the technology is able to achieve are never going to go past that demo stage. I think there's a really big disconnect between a cool demonstration in a video or, you know, chatting back and forth with OpenAI's new GPT-4o model or something like that, and actually making a change at a business or improving some sort of process. Because really what it requires is domain expertise applied with the technology. And if we don't let the people who actually have the domain expertise understand how to work with the tech, then you never actually see that transition into practice. So the thing I'm most keen on not having happen is everything just ending at the demo stage.

Shannon Sands: Like Andriy, I think the risks are greatly overstated as far as sci-fi, robots-taking-over-the-world type things. I definitely think there is more of a risk of things stalling out into some sort of new AI winter, a failure to translate research and demos into actionable value adds. But I don't think we run a significant risk of that yet either. I just think it's a matter of making sure that the AI developer community embraces some good engineering practices and actually delivers on the promises.

Dave Mackenzie: I actually think it's the energy required for these models. My fear is that we have this amazing technology that could really transform how we work, and all we ever do with it is make a better chatbot for Facebook or something. The one thing that keeps me up at night is how we convert the technology into meaningful change that improves everything we do in our communities.

Leeanne Bond: For me there are a couple of things. One is control: whether I know that it's being used and can approve whether it's used or not. The other one is traceability. So, if we're doing some work, we actually know where it's being used and have an ability to trace and verify.

Kim Sherwin: Andriy, what do you see as the biggest risk, whether it's adoption or lack of it, or privacy-type things?

Andriy Mulyar: I'm gonna talk on the adoption side. I think you're going to see a really big divide between organisations in how they're able to innovate in whatever industry they're in, based on their adoption of these systems. But that adoption is not going to be, can you hire the best AI engineers for your company? It's mainly, do you actually have the best practices in your business around unlocking access to the kind of computational resources that are required to build and run AI systems, and around actually finding those really important problems in your business that have an actual ROI above the cost of running these systems in production, because things like large language models are not cheap to run. And the biggest risk, I think, is companies not realising early enough that you need to make investments in buying GPUs, right? If you don't have GPUs at your company, you're going to fall behind in innovating in this space. You can't hire the best AI talent if they don't have computational resources to be experimenting and running with. One of the things we see at Nomic all the time is, whenever we interview a new hire, at the point in the interview where they get to ask questions, they ask: how many GPUs do you have? And if the answer is not high enough, they don't want to work for you, right? If you want to retain that type of talent, you need to be making that sort of investment. The investment isn't just finding the best place in your company to apply AI, but really building up an infrastructure layer in your company to support that sort of growth going forward. And if you don't do it, your competitors will, and they'll out-innovate you.

Shannon Sands: Following on from that: not correctly identifying the real pain points where AI would actually make a real difference. I know it's a real novelty at the moment and everyone wants to throw a chatbot into everything, but there isn't necessarily a need for that. And conversely, there might be opportunities where other types of AI systems would be a better fit, something that works in the background, maybe analysing sensor data or reporting or something like that. So it's about moving with a strategy that's willing to take the plunge into the new technology and the new space, but at the same time keeping your feet on the ground as far as what the current capabilities are and how they're actually going to fit into the business.

Kim Sherwin: Where do you see the industry in five years’ time?

Dave Mackenzie: I hope in five years’ time we're not talking about generative AI. I think it should just be part of how we work and integrated. And if our digital strategy is working, it will be institutionalised in how we work and be how we're competitive and just go to market. I think right now it's a new technology and a tool that's on the precipice of getting accelerated into our core business. But in five years, I hope we're not still talking about it.

Andriy Mulyar: The analogy I like to draw whenever I get this question is, think about how businesses that were technology forward, or had large technology arms, operated in the 2000s: you had your own data centres, you had these giant IT teams running computers, putting in server racks, pulling out hard drives when they broke. Then, in the 2010s, everyone went cloud forward. You had this big cloud transition, and looking back on it, it's been about a decade since a lot of organisations did this. Now you have these three or four major first-rate cloud providers, so there's a lot of choice. The thing that actually ends up being impactful is: how much do you trust those providers to deliver the level of service that you want? How well are you able to serve your end customers through those services? As the technology matures, it's going to boil down to those core business principles, because access to generative AI and these systems is not something that's going to stay locked behind the doors of OpenAI and companies like that. Every single organisation is going to be able to build its own systems and have access to these building blocks, and it really comes down to the things that actually transform the business. So I always draw the analogy to the cloud transformation that happened in the 2010s. It's likely going to be near identical in how this all resolves over the next couple of years.

Kim Sherwin: I've been reading quite a bit about AI anxiety, and I admit from time to time I suffer from it. Should we be doing this? Is this the right thing? There’s the whole ethics thing. And then there's, we're not going quick enough. We need to do more. We need to do more. And I feel the sentiment is quite real. How do you think we manage that and take more people on the journey and kind of embrace what this future might be for us?

Andriy Mulyar: So I think there are two vectors when I think about AI development, right? There's capabilities: can we interrupt the chatbot we're talking to and get a response back that feels like a real human conversation? That's always pushing the boundary of what these systems can do. And then there's, should we have built this thing in the first place? Is it injecting some sort of bias into the world? Is it going to systematically discriminate against a certain group of people? Because these models are functions of their training data, and the data we're feeding into them carries the biases of us humans, who produced it. When I think about these sorts of risks, I really think people are over-inflating them, because there's a lot of marketing value in promoting the capabilities of AI systems. That's how you scare people into buying things or making decisions out of fear as opposed to logical reasoning. People should really consider, especially if you're buying an AI system for a business, what the ultimate impact is going to be, not just for the technical teams, not just for the first adopters of the technology. How do you make sure everyone in the company can access whatever system you're purchasing or using?

Dave Mackenzie: I oscillate between the same feelings of anxiety around we're not going fast enough and we're going too fast. There are times where we want to go really quickly, but there are times where we want to slow down and be considerate and really understand what we're doing and how this technology is going to change a process or a workflow, particularly with automation, particularly when we're looking at our business model and how we go to market. We need to move quickly, but we need to slow down and consider those things very, very carefully.

Leeanne Bond: When you were saying that, I was thinking of Daniel Kahneman's Thinking, Fast and Slow. There are times when you think fast, and other times you need to sit back and look.

Kim Sherwin: What's one bit of advice you would give to just get started?

Andriy Mulyar: Started with what?

Kim Sherwin: AI.

Andriy Mulyar: Always be open to trying new things. I know it's kind of hard when there seems to be a new advance or some new iteration coming out every day. I personally sometimes get to a point where I'm just like, if it's important, it'll bubble up and someone else will suggest it to me. But having that willingness to always try the latest innovation shows you, number one, what the actual flaws in these current systems are, and there are a lot of flaws and holes in a lot of these systems. But there are also things that are genuinely useful, and the only way to really experience that is to try it yourself. The second-hand accounts you see in push notifications on your phone, or on Hacker News, or that land in email threads, those are a very filtered-down view that maybe has marketing angles to it. Nothing beats actually trying these systems yourself. And if you can program, run the code yourself so you can really use the system.

Shannon Sands: I'd say, make use of it. Look for opportunities to use it to improve your own productivity in your day-to-day tasks. As for fears and reservations: it's like anything, it's a new technology, there are a lot of unknowns, but familiarity will remove a lot of that. You'll realise there are some really powerful things you can do that are just full-on force multipliers, and you'll be way more productive. But there are also lots of things it just can't do yet and may never do. So get your head to the space where you can intuitively understand, okay, these are the kinds of capabilities we can expect, and this is what we can expect not to work. Then, when it comes to actually making a decision around implementation, you'll have a better understanding of, okay, that's probably going to work, that's probably not going to work. It's a bit like if we were having this conversation in 1992 and someone was asking me about the web: I'd just say, use it and get used to it, because it's not going anywhere. And start to think, in terms of your specific business, how this is actually going to fit in for you. The fear element will go away too; it will become very mundane in the next five years.

Kim Sherwin: Super. Thank you. I'm going to stop hogging the microphone and we're going to throw questions out to the audience. The arms have flung up straight away.

Clive Silva: Yes, Clive from PSH Group in Adelaide. My question is about commercialisation and the monetary models of what we're doing. Of your clients that are using your product, how many are using it for their internal problems versus solving client problems? And for those that are solving client problems, where delivery goes from, say, 90 days to one week, how have they stopped Costco saying, we only want to pay 10 per cent of what we were paying before? Do your customers come to you for advice on how to monetise the use of your software?

Andriy Mulyar: The way you solve that problem, of that not happening, is not charging hourly, but charging platform fees and licences, that sort of thing. But that's kind of hard, right? Because then you have to build a really strong case for why someone would want to sign up for access to the software for a year or two. What are the extra benefits coming from that?

Clive Silva: Have we thought about how we're going to do that? I mean, I've had a chat with a few people about this. The vast majority of our clients, especially our really big clients, pay for our time. So we have to shift that, or change clients.

Dave Mackenzie: Yeah, we're literally thinking about that every minute of every day at the moment. It is absolutely a hot topic. To be frank, we don't have all the answers yet. We've got views on what that might look like, but there's no solid answer, and I don't think there's going to be a silver-bullet solution. One organisation we met with, a law firm, said that part of their strategy over the next five years is, for their portfolio clients, how do we transition them to unlimited advice at a fixed rate: service as a product. That's interesting to me, because we're not going to be the only people who have to try to solve this problem and see how it shifts. My big concern, though, is the big contracts that we have, and whether the industry is ready to have a different kind of conversation.

Clive Silva: Is this the time for capital expenditure? Like, is the industry solidifying, are technologies and processes solidifying, or are we still in an R&D environment where an investment now could easily be lost?

Shannon Sands: Look, honestly, I think it's kind of hard to say. Things are moving very quickly at this point, and I do think we're still very much in an R&D phase to some extent. Things are moving so quickly that I couldn't even venture a guess at what's going to be possible in two years' time, so to be able to say, okay, we need to spend X, Y, Z on GPUs at this point, I don't know. On the other hand, you also want to be building that capability in the first place, in terms of your internal competencies and getting familiar with the pipeline, because whatever comes next will derive from how we're currently operating, in terms of training models and building data and all of those kinds of good things; it'll still be something that builds off that. So essentially it's a question of how much you want to commit to participating in that process of research and development and help define those best practices that nobody has yet, versus waiting and seeing, and going, okay, good, now there are best practices, now we have a good idea, but we're also now trying to spin up when competitors have potentially already taken the plunge. So maybe it's a question of half and half.

Kim Sherwin: Super. Thank you so much.

+++++

Maria Rampa: We hope you enjoyed this slightly different episode of Engineering Reimagined.

Our discussion today highlighted the pressing concerns and exciting opportunities that AI brings to the table. 

From the necessity of bridging the gap between impressive demos and real-world applications to ensuring the ethical and controlled use of AI technologies, our panel provided a comprehensive look at the landscape of AI in today's business environment.

If you found this episode insightful, be sure to subscribe on Apple or Spotify, and follow Aurecon on your favourite social media platform to stay updated and join the conversation. By engaging with these discussions, we can all contribute to a more thoughtful and informed approach to the future of AI.

Until next time, thanks for listening. Stay curious, and keep reimagining the future of engineering.

