Technology

AI: transforming the unknown into an asset

Dave Mackenzie and Andriy Mulyar | 10 July 2024 | 17:46

Podcast Transcript: AI: transforming the unknown into an asset

Maria Rampa: Hi, I’m Maria Rampa and welcome to this episode of Engineering Reimagined.

In 2016, artificial intelligence expert Geoffrey Hinton famously declared that radiologists would be out of a job within 10 years, as the specialty was going to be completely taken over by AI.

Well, here we are eight years on in 2024, and there are certainly still plenty of radiologists around. While not all of them have had to embrace AI to survive, those who have are much further ahead in efficiency and in their careers.

The same story is set to play out across many professions as we all learn how to work with AI in our day-to-day roles. The winners will be those early adopters who learn to optimise and augment their work with AI.

The accuracy of AI responses and the transparency and reliability of training data are also key issues for users. Considering the sources of an AI’s knowledge is just one way to help ensure its responses are accurate and unbiased.

In today’s episode of Engineering Reimagined, Dave Mackenzie, Managing Principal, Digital at Aurecon, speaks with Andriy Mulyar, Founder and Chief Technology Officer at Nomic. Together they discuss how emerging AI technologies are being adopted within industries to improve workflows and augment professional expertise, how generative AI programs source and collate their responses, and what causes an AI ‘hallucination’. Let’s listen!

Dave Mackenzie: Welcome, Andriy, to Aurecon's Engineering Reimagined podcast. One of the things I've been thinking about a lot lately is that AI's been around a long time, since the 1950s or early 60s. But recently, as I'm sure everyone's aware, it exploded onto the scene, with ChatGPT really capturing the zeitgeist and everyone's imagination, and a whole range of different use cases, some weird and wonderful and some amazing. One of the things I often get asked about is that these models feel like a bit of a black box. Where does the information come from? How can we understand that? And how can we have any level of assurance over the responses coming from these models?

Andriy Mulyar: So I guess AI really came into the spotlight in November of 2022 with the release of ChatGPT by OpenAI. But these models, and the systems and engineering that go into them, the technology that drives them, have existed for almost a decade. And it's a combination of factors. Number one, the modelling: the techniques to produce models over this type of data, with the kinds of capabilities you see in ChatGPT, just didn't exist two decades ago. The computation wasn't there either; the resources, the GPUs, that sort of gold that powers these systems, didn't exist. And most importantly, the data, the data that is instilled into these models, that these models compress into the entity you interact with, didn't exist. The core driver behind the quality and capabilities of the AI systems you work and interact with is the data that goes into them. It used to be the case that you'd train an AI model on hundreds of thousands of examples, maybe even millions. Nowadays you train AI models on all of human history recorded onto the internet, and these models compress that information into a format that's queryable with human language. And what's really crazy is that this compression function, as you said, is often a black box. What information does the model contain? What biases does it contain? What can it do and what can it not do? It's really hard to probe these things, and because of that, it's really hard to transition from a cool gimmick, something that might seem like it has a lot of value, to something that actually translates to real value in the world.

Dave Mackenzie: Something we often hear about is this idea of models hallucinating, and that being a barrier to really using the power of these models in technical fields, or fields that require accurate answers that can be depended and relied upon. So is there a way of demystifying that black box to create that assurance?

Andriy Mulyar: Yes, so what is a hallucination? At their core, these AI models, these transformers trained on large amounts of text, are really a compressed version of the entire set of text they were trained on, all of it, essentially the whole internet. The model is a compressed, queryable black box of that data set. And it turns out that when you compress a lot of information, things get noisy, things get lost. Hallucinations are an artifact of that process. So what this means is that when you ask a model like ChatGPT a question about some general-purpose domain, something that appears a lot on the internet, for example the constitution of a country, it'll be able to answer really well. But when you ask something about engineering, it might not do that good a job, because there's not that much data in the world for it to gather samples from. And what's crazy is that these hallucinations are not something you can very easily root out of the model. That causes a lot of problems, because you want to depend on these models for real-world purposes.

Dave Mackenzie: We've just been talking about RAG, or retrieval augmented generation. Could you explain that and expand on it a little further for our audience?

Andriy Mulyar: What RAG allows you to do is ground your language model on data that is specific to the kinds of things you want it to be able to talk about. By grounding it in this data, you reduce the possibility of the model hallucinating, because you're giving it evidence that it previously would not have been able to gather from its giant pre-training data set, for instance the whole internet. Say I ask the model, how do I build this bridge with this set of components? If you ask ChatGPT to do this, the response you get would be very generic. But an organisation like Aurecon has tons and tons of documentation that might relate to those components, or to the building of bridges, that you can pull in and give the model as context to use. That reduces the chance that the model's output is not useful to you because it's too generic, and more specifically, that it's going to make things up.
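
In code, the retrieve-then-generate loop Andriy describes might look something like the minimal sketch below. The `embed` and `generate` functions, the documents, and the question are all placeholder stand-ins for a real embedding model, language model, and corpus; this illustrates the pattern, not any particular product.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: map text to a unit-length vector.
    Swap in a real embedding model in practice."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def generate(prompt: str) -> str:
    """Placeholder: call your language model of choice with the prompt."""
    return f"[model response grounded on: {prompt[:60]}...]"

# 1. Index: embed each document in the organisation's corpus once.
documents = [
    "Bridge deck design guidance for composite steel girders.",
    "Bearing selection notes for long-span road bridges.",
    "Internal style guide for drawing title blocks.",
]
index = np.stack([embed(d) for d in documents])

def answer(question: str, k: int = 2) -> str:
    # 2. Retrieve: find the k documents most similar to the question.
    q = embed(question)
    scores = index @ q  # cosine similarity, since all vectors are unit length
    top = np.argsort(scores)[::-1][:k]
    context = "\n".join(documents[i] for i in top)
    # 3. Generate: condition the model on the retrieved evidence.
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

print(answer("How do I select bearings for this bridge?"))
```

The key design point is step 3: the model's answer is conditioned on retrieved evidence from your own corpus rather than on its pre-training alone.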

Dave Mackenzie: Certainly one of the things we've been developing here at Aurecon is retrieval-based solutions, like Aurecon Recall. I was wondering if you could talk to how that changes how these models function.

Andriy Mulyar: It's good to see that people are taking the right approaches to fight this sort of problem. It turns out hallucinations can't very easily be trained out of a model; putting in a bunch of very specific data and updating the model's training process doesn't actually do that good a job of removing them. One way to avoid hallucinations is what you mentioned here, a system like Recall. For a specific domain like engineering design, it lets the model condition specifically on text that comes from that domain, in addition to the general knowledge it's acquired from training on a large amount of internet text, so it can be hyper diligent at answering questions well in that domain. Techniques like retrieval augmented generation are really key solutions for stopping language models from hallucinating in specific domains.

Dave Mackenzie: That's been our experience as well; we've been able to relay technical information with a degree of confidence and accuracy. All these things start to build up a picture of what explainable AI is as a concept, and I know that's part of Nomic's mission and passion. I was just wondering if you could, one, explain the concept of explainable AI, and two, talk to how Nomic plays in that space?

Andriy Mulyar: Ultimately, what we do at Nomic is make it really easy for practitioners, or ideally people who don't even have machine learning experience, to interact with the kinds of data sets and models that exist in the modern world, and to explain how those models are acting. It turns out that how models act is a function of the data they're trained on. So explainability of models often comes down to explainability of the data that goes into them.

Dave Mackenzie: I think you touched on something that's really important, and certainly something many organisations will have encountered, where you have Joe in the corner: he's been in the organisation for 30 years, knows everything about a given topic, and is time poor. Engaging with that SME and getting information out of them can be quite difficult. One of the things that's exciting for me is, how can we give someone like Joe the tools to own the data in his domain, so that we can, in a way, democratise access to Joe? Nomic has your Atlas platform. I was wondering if you could talk to how Atlas delivers on that idea, that you don't need a data scientist, you need an SME who can own and manage their own information.

Andriy Mulyar: So we were talking about black box AI systems like ChatGPT, the ability of these systems to hallucinate, and then things like RAG as solutions to stop the hallucination. One of the key components of any RAG system, the thing that drives that personalisation and de-hallucination of models, is this thing called embeddings. Embeddings are a mechanism of representing data that allows computers to interact with data semantically, to exchange meaning, not just text. What we do at Nomic, especially with our Atlas product, is allow the everyday human to interact with that semantic object that is an embedding. So in that example you gave of a really experienced subject matter expert in an organisation, unlocking what they're able to do is really a process of understanding, relative to the data they're working with, what kinds of operations they perform on that data to complete their job. Being able to empower anyone in an organisation, not just a machine learning expert or engineer, or even somebody who can program, to interact with high dimensional data sets like the ones that go into and out of generative AI models, is the core of what our product is.
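
As a rough illustration of what interacting with high dimensional data can mean, the sketch below projects a set of embeddings down to two dimensions so they could be plotted and explored as a map. The data here is random and the projection is plain PCA; this shows the general idea of an embedding map, not Atlas itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are 768-dimensional embeddings of 200 documents
# drawn from two loose topic clusters (purely synthetic data).
cluster_a = rng.normal(loc=0.0, scale=1.0, size=(100, 768))
cluster_b = rng.normal(loc=3.0, scale=1.0, size=(100, 768))
embeddings = np.vstack([cluster_a, cluster_b])

# Centre the data and take the top two principal components (PCA via SVD).
centred = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
coords_2d = centred @ vt[:2].T  # (200, 2) points ready to scatter-plot

# Nearby points on the resulting map are documents whose embeddings are
# similar, i.e. documents the model treats as semantically related.
print(coords_2d.shape)
```

In a real tool the points would be labelled with their source documents, so a subject matter expert can explore and curate the data without writing code.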

Dave Mackenzie: One of the things I'm excited about is the possibilities that unlocks. There are loads of use cases. When I think engineering, I often think that engineering projects are messy. They're complicated, they're big, and all different kinds of information have to come together to get to a design outcome. As a result, I feel as though complexity is going up. Things are getting harder to do. You require more Joes, so to speak, on any given project. I just wondered if you had a view on how generative AI might help manage that complexity and create space for better solutions or better engineering design outcomes?

Andriy Mulyar: It all starts at the data level. So what does it mean to have another Joe at the company? It means somebody who has the experience, who knows the processes, who knows the types of actions you should take when you get a new PDF from some new data source, what things to look at, what things to read. That's the kind of domain knowledge that person brings that allows whatever process they're working on to accelerate. Understanding large quantities of data, in forms that are really, really hard to dig through without domain expertise and knowing where to look, is problem number one. Any time you have some system or process you're trying to automate, that automation is a function of building some sort of data-driven tool that lets you replicate those Joes. The problem is that most organisations have a really, really bad setup for understanding and managing their data sets and all their disparate sources of data. You might have a bunch of Microsoft subscriptions with data stored in a bunch of different places, while the knowledge lives in Joe's mind. How do you replicate more Joes? Well, you start by replicating the set of information that created Joe. With that data, you can then start thinking about, hey, how do I go about automating this sort of thing? And maybe generative AI is a solution there. But if you don't start at the raw data, just having a fancy model that outputs things is not going to get you there.

Dave Mackenzie: One of the things that's always exciting for us and our audience is learning about AI use cases outside our field of engineering. Are there other technical organisations that Nomic and yourself have exposure to, and what kinds of problems are they using this technology to solve?

Andriy Mulyar: One concrete example I can give you is one of our customers who basically runs a business of mapping consumer products to government-issued codes. So, for example, maybe you go to a pharmacy and pick up a new product. That product might be behind the counter, so prescription only, or it might be at the front of the pharmacy, where you can just pick it up and walk out. It turns out that to actually make that decision, you have to consult a giant hierarchy of government information about regulatory requirements. They automate this process, and they use our product Atlas extensively as their whole data stack to do it. Basically, if you're building machine learning powered systems, what you're really doing is iterating on the quality and quantity of the data you have, and on the ability of people in your organisation, who are increasingly non-technical, to work with that data. That's the core problem Atlas solves: allowing people to interact with high dimensional data, map it to the real business concepts they have, and democratise that beyond just the engineering side.
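
One common way to implement this kind of product-to-code mapping is nearest-neighbour search over embeddings, sketched below. The `embed` function, the codes, and their descriptions are invented placeholders, not the customer's actual system; a real pipeline would use a trained embedding model and the official code hierarchy.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding function; substitute a real model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

# Reference set: hypothetical code descriptions from a regulatory hierarchy.
codes = {
    "RX-100": "Prescription-only medicines dispensed by a pharmacist.",
    "OTC-200": "General sale medicines available off the shelf.",
}
code_ids = list(codes)
code_vecs = np.stack([embed(desc) for desc in codes.values()])

def classify(product_description: str) -> str:
    """Assign the product to the code whose description is most similar."""
    sims = code_vecs @ embed(product_description)  # cosine similarity
    return code_ids[int(np.argmax(sims))]

print(classify("Ibuprofen 200mg tablets, general sale"))
```

With placeholder embeddings the output is arbitrary; the point is the structure: embed the reference codes once, classify each product by similarity, and have humans review the borderline cases.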

Dave Mackenzie: One of the things I was thinking about as you were speaking is that I often work with grads, and I talk to young people entering our organisation, as well as people who are in our organisation now. With generative AI, they tend to see it as sophisticated automation, whereas I think of it as a way to amplify what we do. But there's also an element of fear around what it means for how I work now and how I'll work into the future. I was just wondering if you had any views on the kinds of skills people should be considering as they come into technical fields, in light of generative AI?

Andriy Mulyar: First, I want to give an analogy from the similar discourse that was happening in the field of radiology. Back in 2016, 2017, there was all this talk from AI experts that there would be no radiologists in five years, and that if you were in med school you should stop studying radiology, a very highly paid profession that takes a lot of years to become an expert in, because AI was going to automate the whole thing. Turns out there are still radiologists. What actually happened is that the radiologists who didn't adopt AI were beaten out by the radiologists who did. Both are still there, but the ones who adopted AI are much further ahead in terms of efficiency, quality, and quality of life, being able to enjoy their jobs, because they're able to augment themselves and be more effective in the parts of the job where they actually need to be effective. This is the exact analogy you should use when you're thinking about what happens in a post-ChatGPT world, where everyone has access to these chatbots that can serve as little sidekicks. You should think about these things as augmentations to your workflow. You shouldn't think of these things as threats. If you do, you will lose, because there will be other people, maybe less educated than you, or maybe with bigger roadblocks to achieving the same kinds of things you can, who embrace the technology and are augmented past you because of it. So it's really the kind of thing where, if you're an expert, you should be using this, because it's going to make you more of an expert. And if you think this is a technology that's blocking you, you should really rethink how you think about the technology.

Dave Mackenzie: Andriy, one of the things I think about often with generative AI is, how can this help us get to a better place in the future? I have this general view that complexity is increasing. Sustainability, the energy transition and climate change are really complex problems that require collaboration amongst experts on a scale we haven't seen before to really move the needle. And I think one of the ways we could move the needle is by using generative AI to create access to information in a way we haven't had before. One of the examples I've been playing with is: how could we have every technical drawing ever created for a given project be accessible through AI, so that AI not only understands a report written about the project, it actually understands the technical details, and the design intent is baked into its responses? I think that would be quite transformative. I was just curious what your thoughts are on how we can continue to transform the future and get to that place.

Andriy Mulyar: Over the past two years, we've seen adoption at massive scale of systems that can take in text and generate text in ways that are extremely useful across a wide range of industries. Where we're going next is multimodal. Think about the signals around us: when you step outside, you're not just listening to people talk or reading things. Your sensory inputs are images, frames coming into your eyes, with your brain processing and reacting to them. That's also the modality of data that exists in a lot of technical domains. How do you build a bridge? You build a bridge with a bunch of blueprints and schematics, with textual descriptions associated with them. Being able to work with that type of data, and to use it as part of an automated decision-making process, was just impossible before the kinds of tools and systems that emerged out of this most recent generative AI revolution. Computers couldn't do that. They can now.

Dave Mackenzie: For many people, generative AI is just text, it's just a chatbot. The sooner we get away from that thinking and recognise that it's going to be text, but also images and videos and other kinds of media, which is what multimodal means, the sooner we can move to solving real problems. My great fear is that we have this amazing technology, and all we ever do is use it to make social media better rather than solving amazing problems.

Andriy Mulyar: Vision is a modality that I'm extremely excited about. I think it'll be crucial to really leveraging the power of this technology for everyone.

Dave Mackenzie: Absolutely. Andriy, thank you so much for joining us on our podcast today. It's been a fascinating conversation. Thank you for taking the time.

Andriy Mulyar: Dave, this is great. Thank you.

Maria Rampa: We hope you enjoyed this episode of Engineering Reimagined.

AI is a game changer for industries and workplaces around the world, empowering workers and augmenting existing capabilities.

If you enjoyed this episode, hit subscribe on Apple or Spotify and don’t forget to follow Aurecon on your favourite social media platform to stay up to date and join the conversation.

Until next time, thanks for listening.


Increasing AI's accessibility and value

Are you an early adopter of AI? Or do you see it as a threat?

We all know the winners of AI will likely be those who embrace the technology early on, but it’s not without challenges. The accuracy of AI responses and the transparency and reliability of training data are key issues for users.

In this episode of Engineering Reimagined we explore how AI can empower workers by augmenting existing capabilities, how emerging AI technologies are being adopted within industries to improve workflows, and what causes an AI ‘hallucination’.

Dave Mackenzie, Managing Principal, Digital at Aurecon, speaks with Andriy Mulyar, Founder and Chief Technology Officer at Nomic. They discuss how generative AI programs source and collate their responses to improve accuracy and reduce bias, and why AI technologies should be considered an asset.

“You should think about these things as augmentations to your workflow. You shouldn't think of these things as threats. If you do, you will lose, because there will be other people, maybe less educated than you, or maybe with bigger roadblocks to achieving the same kinds of things you can, who embrace the technology and are augmented past you because of it,” said Andriy.

Meet our guests

Learn more about Dave Mackenzie and Andriy Mulyar.
Dave Mackenzie | Managing Principal, Digital, Aurecon

With 18 years of experience in digital strategy, artificial intelligence, machine learning, visualisation, software development, and agile project delivery, Dave leads Aurecon’s digital transformation initiatives.

Andriy Mulyar | Founder and Chief Technology Officer, Nomic

Andriy cares about making AI systems, and the data they are trained on, more inclusive and accessible to everyone. Prior to Nomic, Andriy was an early engineer at RadAI, where he trained multi-billion parameter large language models to assist radiologists.
