Season 2: Episode #6

The Future of AI and AWS

AI is much more than ChatGPT — and this week, we’re diving into all things AWS and Artificial Intelligence. Join us as we go behind the scenes (and beyond the hype) with an exclusive interview with Ankur Mehrotra, general manager at AWS Machine Learning. We’ll also speak with analyst Benedict Evans about the bigger picture around AI and the future of business.

Ankur Mehrotra

Guest

General manager at AWS Machine Learning

Benedict Evans

Guest

Consultant, mobile analyst, and pundit

Transcript

Hilary Doyle: CodeWhisperer, SageMaker, Rekognition, CodeGuru, CloudFormation. AWS has really turned up the volume on machine learning services to build, deploy, and run applications in the cloud.

Rahul Subramaniam: You’d think that we don’t really need developers anymore, wouldn’t you? That AI can just write all our code and run all our services?

Hilary Doyle: Not for me, Rahul. Our developers are irreplaceable.

Rahul Subramaniam: Well, that’s good because that’s not where we are at with AI. AI’s having a dramatic impact on everything that we do. But in this episode, we are going to go behind the scenes at AWS and beyond the hype to figure out just what’s going on.

Hilary Doyle: Lifting the curtain. Let’s get to it.

This is AWS Insiders, an original podcast from CloudFix, bringing you what you need to know about AWS through the people and the companies that know it best. CloudFix is the nonstop automated way to find and implement AWS-recommended savings opportunities. It never stops. I’m Hilary Doyle. I’m the Co-founder of Wealthie Works Daily.

Rahul Subramaniam: And I’m Rahul Subramaniam. I’m the Founder and CEO at CloudFix.

Hilary Doyle: There is no avoiding the frenzy around AI right now, but we want to understand what it all means for folks like us who use AWS.

Rahul Subramaniam: Okay. I’d like to split the AI frenzy into two periods. Let’s call it the pre-GPT and the post-GPT era.

Hilary Doyle: Iconic eras, both.

Rahul Subramaniam: Absolutely.

Hilary Doyle: Yeah.

Rahul Subramaniam: And without a doubt, AWS ruled the pre-GPT era. Their focus on building out the most comprehensive pipeline and infrastructure for machine learning for over a decade is evident in the dozens of services that you see in the ML and SageMaker catalog. Right?

Hilary Doyle: Mm-hmm.

Rahul Subramaniam: Now, the post-GPT era is where we are at right now, and honestly, I have to admit that I haven’t seen a more exciting time in technology during my entire career.

Hilary Doyle: Wow.

Rahul Subramaniam: Everyone is trying to figure out how to build new solutions that can leverage these magical capabilities of generative AI models and stay ahead of the race. Having spent quite a lot of time with AWS teams in Seattle over the last few weeks, I actually feel really excited about AWS’s focus on not trying to invent the end use case, but instead, to ensure that everyone has the infrastructure and the pipelines to build whatever they want. This vision is quite different from what Microsoft and Google appear to be chasing right now.

Hilary Doyle: Yeah. And it speaks to what AWS has always done, which is, build it and they will come and teach you how to use it.

Rahul Subramaniam: Exactly.

Hilary Doyle: Okay. We’re going to get into all of this, but first, we’ve got your AWS News Headlines. 

Well, speaking of AI and AWS, that is what our first story’s about. There’s now a preview release of Amazon CodeGuru Security. That’s a tool that uses machine learning to help identify code vulnerabilities and provide guidance. This is important. Rahul, is this going to make coding AIs more reliable?

Rahul Subramaniam: Yeah, so I love Amazon CodeGuru. Think of it as a tool that can analyze your code and tell you what you’re doing wrong. CodeGuru used to specialize in performance-related analysis, and now it includes a whole bunch of security fixes that it can just recommend out of the box. They’ve trained an ML model on all the code that AWS has written and, of course, a whole lot more. If someone hasn’t run CodeGuru on their code yet, they should do that right away.
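For listeners who want to try this, here is a minimal, hypothetical sketch of triaging CodeGuru Security findings with boto3. The client calls are commented out because they need AWS credentials, and the response shape below is an assumption modeled on the GetFindings API; check the CodeGuru Security docs before relying on it.

```python
# Hypothetical sketch: triage CodeGuru Security findings by severity.
# The boto3 calls are commented out (they require AWS credentials);
# the finding fields used below are assumptions based on GetFindings.

# import boto3
# client = boto3.client("codeguru-security")
# findings = client.get_findings(scanName="my-scan")["findings"]

def high_severity(findings):
    """Keep only findings marked Critical or High so they get fixed first."""
    return [f["title"] for f in findings
            if f.get("severity") in ("Critical", "High")]

# Sample data standing in for a real API response:
sample = [
    {"title": "Hardcoded credentials", "severity": "Critical"},
    {"title": "Unused variable", "severity": "Low"},
    {"title": "SQL injection risk", "severity": "High"},
]

print(high_severity(sample))
```

The same filter would apply unchanged to a real response once the commented-out calls are enabled.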

Hilary Doyle: Our guru has spoken. Okay. Next, we have a couple of stories about EC2. First off, thanks to Amazon EC2 Instance Connect endpoint, folks will now have SSH and RDP connectivity to their EC2 Instances without using public IP addresses. It is hilarious. There are so many letters. Rahul, why is this such a big deal?

Rahul Subramaniam: Okay. Hilary, have you heard of something called a jump box?

Hilary Doyle: I have not, but it sounds great and very active. Yeah.

Rahul Subramaniam: It sounds funny – but when you have machines that are behind a network firewall and inside a VPC, accessing them for debugging or maintenance has always been a real pain. You either had to add a public IP address to these machines, which made the whole point of network isolation moot, or you used publicly accessible jump boxes to access these servers that were behind the network.

Now, long story short, it has always been a very, very messy process and has been the root of a whole bunch of security bad practices. For the first time, AWS takes care of all this messiness and gives you a clean way to access these machines without punching security holes in your deployment. It’s a big deal.
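In practice, the replacement for the jump box looks something like this with the AWS CLI. This is a hedged sketch: the instance ID is a placeholder, and an EC2 Instance Connect Endpoint must already be provisioned in the VPC, so the actual commands are shown as comments.

```shell
# Placeholder instance ID; substitute your own private instance.
INSTANCE_ID="i-0123456789abcdef0"

# With an Instance Connect Endpoint in the VPC, this opens a local
# tunnel to port 22 of the private instance (no public IP, no jump box):
#
#   aws ec2-instance-connect open-tunnel \
#       --instance-id "$INSTANCE_ID" \
#       --remote-port 22 --local-port 2222
#
# Then, from another terminal:
#
#   ssh -p 2222 ec2-user@localhost
#
echo "tunnel target: $INSTANCE_ID"
```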

Hilary Doyle: That does sound like a big deal. All right. We are not going to leave the world of EC2 yet, nor are we going to stop talking in letters and numbers. Do your best to keep up. AWS just announced a preview of M7a instances, designed to deliver the best x86 performance and price performance within the Amazon EC2 general purpose family. Rahul, are we right that this is all thanks to the fact that M7a instances are powered by fourth-generation AMD EPYC processors?

Rahul Subramaniam: Yes, absolutely. AMD is winning the x86 performance races. They just announced the latest generation of processors that are now available on the AWS platform. This announcement matters because a large majority of EC2 instances are still on older generations, while the latest generations are not just more performant but also cheaper. And unlike in your data center, it pays to constantly move your deployments to the latest generation. It’s a big mind shift for people coming from the on-prem world.

Hilary Doyle: We like a good mind shift. That was a real mouthful this week, but that’s it for our AWS Headlines.

Okay. Let’s get back to AI and AWS.

Rahul Subramaniam: You had an exclusive sit down with Ankur Mehrotra, General Manager at AWS Machine Learning, to get a glimpse of what’s happening behind the scenes.

Hilary Doyle: That is right. We will get to that interview. But first, we wanted to set the scene with a conversation about AI and the business world.

Rahul Subramaniam: Joining us now is Benedict Evans. For the past two decades, Benedict has been analyzing mobile media and technology and is currently focused on all things AI.

Hilary Doyle: Yes. He is the author of Benedict’s Newsletter, to which you can subscribe. Benedict Evans, welcome to the show.

Benedict Evans: Thank you for having me.

Hilary Doyle: I run a young digital company and I’m hoping that you can help me scenario plan, because certainly – in the context of business right now – everyone is trying to understand how to be ready for the future while the future is still happening. What roles would you say business leaders could reasonably expect to hand off to AI in the next two to three years?

Benedict Evans: I made a joke on Twitter a while ago that when people in tech say five to 10 years, they mean never. And when people say never, they mean five to 10 years. There’s this delta of how long it takes and certainly, you talk to excited 22-year-olds in Silicon Valley and they think that all law firms are going to disappear by the end of the year. The reality is like, let me explain what an RFP is.

Hilary Doyle: I’ll talk to you in 10 years. Yeah.

Benedict Evans: Let me explain how long it takes. Let me explain what an enterprise sales cycle looks like. Part of the reason that cloud is moving slowly is, you’ve got all this stuff and it works. Now, you want me to move from infrastructure to OpEx, so that’s going to push my earnings down. I’ve already bought all this stuff and depreciated it, so I’m going to have to take a write-down. So there’s a lot of organizational stuff, and we’ve got other things we need to build this year. We’re not completely changing our platform. It takes time for organizations actually to deploy this kind of stuff, even if it’s clearly ready.

Hilary Doyle: Do you think it’s clearly ready?

Benedict Evans: It’s funny. A lot of people I’ve seen are coming to the same conclusion, which is that trying to use this for general search is the only thing you can’t use it for. It’s the only thing that doesn’t work. There was a brief moment when Microsoft was saying that you could use this for web search. Not only can you not use it for web search, that’s trivial relative to everything else you can build with it. Although yes, search advertising is whatever it is, $200 or $300 billion. But no, it’s everything else that you use this for, and the core of that is that you have this word, bullshitting. Can we say bullshitting on this podcast? You can bleep it out, whatever.

Hilary Doyle: You sure can.

Rahul Subramaniam: Absolutely.

Hilary Doyle: Welcome to it.

Benedict Evans: You have this word, people describe this as what these models are doing is bullshitting. I’ll give you an example for anyone who doesn’t know what I’m talking about, and then I’ll explain why that’s a bad way of thinking about it. Go and get it to do a biography of you. If I get it to write a biography of me, because that’s obviously something that I know a lot about, it will say, “Benedict Evans is a world-renowned and hugely influential thought leader,” which is obviously correct.

And then it will say, “Went to university at Oxford.” No. Reload. “Cambridge.” “First job was in investment banking.” Correct. “At DKW.” No. Hit reload. “First job was at McKinsey.” No. Hit reload. “First job was a journalist at The Guardian.” “Oh, also did a degree at the LSE.” No, I didn’t. Keep going. What it’s doing is producing an extremely accurate version of what a biography of someone like me tends to look like.

Hilary Doyle: Your shadow self who went to Oxford. Yeah.

Benedict Evans: It’s not trying to make a biography of me. It’s not doing a database lookup. It’s saying what answers to questions like that would look like. It’s making patterns, which, of course, is in principle what all of these systems are doing. It’s making a pattern of what that thing would look like. And the question is, where does it matter? Where can you see the error, and where does the error matter?

Rahul Subramaniam: Benedict, in terms of what you’re seeing around you right now and all of the thought leaders who are expounding about AI and the future, what are some of the misconceptions or conclusions that you feel people are drawing from this moment specifically as it relates to business?

Benedict Evans: People tend to make predictions about the wrong things. The classic observation is that 1950s sci-fi is full of-

Hilary Doyle: AGI.

Benedict Evans: Well, they’re full of interstellar rocket ships that have paper star charts, and you have to queue up to buy a ticket, and they’re piloted by people. Obviously, also they’re piloted by men, of course, but they have pilots. Whereas today, the idea that an interstellar rocket ship is going to have a pilot is self-evidently absurd, and so you’re predicting the wrong thing. If we think about what it is that’s going to get automated now, what kind of tangible mental models can we use to think about this? One of the ways I used to describe machine learning was that it gives you infinite interns.

You want to listen to every single call coming into your call center and tell me if the customer’s angry, or tell me if the service agent is rude. Well, you don’t need an expert to do that. A 10-year-old could do that. Somebody said AI can probably do anything you could train a dog to do, but you don’t have enough dogs. You don’t have enough interns to listen to every single call coming into JPMC’s call center. With machine learning, you can. You could just do sentiment analysis on every single call. That doesn’t automate away everybody else at JPMC. In fact, it doesn’t really automate away the people in the call center.

Over time it will, up to a point, or maybe it changes what it is that they’re doing. Now, what this stuff is doing is, again, back to the conversation we were having earlier about the error rate: write me a sales email. There’s a sales email, okay, but now I’ve got to check it. Again, get the intern to go and do all the stuff, but you’re going to need to check everything that the intern does. It’s still valuable. If you had 10,000 interns who never got tired and did exactly what you told them to do, that would be great, but you’d still need to check what they did.

Hilary Doyle: Benedict, thanks for this illuminating conversation and also for your scientific definition for bullshit, which I particularly appreciated. We’ve really enjoyed having you.

Benedict Evans: Great. Thank you.

Rahul Subramaniam: Thank you so much for being on the show.

Benedict Evans: Sure. Thanks for inviting me.

Hilary Doyle: Before we hear from Ankur about AWS and AI, I am going to remind our listeners that Rahul sings, he plays a guitar, and he dives even deeper into AWS with one other show.

Rahul Subramaniam: Thanks, Hilary. Yes, every week, I’m joined by Stephen Barr for a livestream, breaking down the latest news from AWS and offering our learnings and insights, along with some amazing guests from AWS who join us in the virtual studio as well as in the audience.

Hilary Doyle: It is called AWS Made Easy and it genuinely does what it says. You can find out more about it at cloudfix.com/livestream. Ask your questions live on the show and Rahul and his guests will answer them, as the livestream suggests, also live. What a time to be alive.

This week, I had the opportunity to sit down with Ankur Mehrotra, General Manager at AWS. We are going to play some of the excerpts from that interview. And Rahul, I’m looking forward to hearing your immediate hot takes.

Rahul Subramaniam: Absolutely. I can’t wait.

Hilary Doyle: Great.

Rahul Subramaniam: Knowing Ankur, I feel very confident that you will hear more about how AWS customers rely on AWS to build amazing solutions rather than the bleeding edge AI model that AWS is building right now. But again, curious to hear what he said.

Hilary Doyle: Well, we started by talking about where AWS is at with AI, and Ankur shared that they see a need for solutions across three layers.

Ankur Mehrotra: One is the infrastructure layer, which is where AI research organizations, companies, and startups are building these generative AI models. This requires large-scale training capabilities and capabilities to really optimize these models. For that, we’ve got partnerships with generative AI startups such as Stability AI, Hugging Face, AI21, Anthropic and others who are building their models on AWS. The second layer on top is basically for enterprises and other customers who don’t want to build these models themselves, but actually want to take existing, pre-trained generative AI models, fine-tune them, and then deploy them for their applications.

That is where we really think that no one model is ready, or will be ready in the foreseeable future, to address all their needs, which is why we’ve recently announced Amazon Bedrock, a managed service that provides our customers access to leading third-party models from top providers and research organizations such as Stability AI, AI21 Labs and Anthropic, but also offers Amazon’s first-party models, the Titan models. The third layer is basically applications: end-to-end applications that are powered by generative AI.

This is where we recently announced the general availability of Amazon CodeWhisperer, which you can think of as an AI-powered coding companion that uses some of the generative AI foundation models we’ve built in-house at Amazon. CodeWhisperer is integrated with the common IDEs and can auto-generate code. That’s basically our strategy. It’s really driven by customer need. We are working backwards from the customer to ask, “Hey, how can we address unmet needs in this segment?” That’s how we are approaching it.
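As a concrete illustration of that middle layer, here is a hedged sketch of calling a Bedrock-hosted model with boto3. The model ID and the request body shape are assumptions (an Anthropic-style payload); each model family has its own schema, so check the Bedrock docs for the model you pick. The actual network call is commented out because it needs AWS credentials and model access.

```python
import json

def build_request(prompt, max_tokens=256):
    """Serialize a prompt into a JSON body for Bedrock's InvokeModel.

    The field names here follow an Anthropic-style schema and are an
    assumption; other model families expect different bodies.
    """
    return json.dumps({"prompt": prompt, "max_tokens_to_sample": max_tokens})

body = build_request("Summarize our Q3 support tickets.")

# import boto3
# bedrock = boto3.client("bedrock-runtime")
# response = bedrock.invoke_model(
#     modelId="anthropic.claude-v2",   # assumed model ID
#     body=body,
# )
# print(json.loads(response["body"].read()))

print(json.loads(body)["max_tokens_to_sample"])
```

Swapping the `modelId` is the whole point of the Bedrock design Ankur describes: the surrounding code stays the same while the model underneath changes.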

Rahul Subramaniam: It’s important to understand that AWS has always thought of themselves as the plumbing for the online world. They are very, very good at understanding what customers want and then breaking down the vast footprint of what needs to be built into slices that can be built by their very own famous two-pizza teams. There are companies out there that focus on one thing and one thing alone, like building new foundational AI models, and often get very good at it. But when it comes to AWS, they see the value in stitching together all of these different pieces to create even greater overall impact.

In this particular case, they’ve been at work on the SageMaker ecosystem for years, and that has enabled customers to do everything from data gathering, data cleansing, analysis, and model training, to deploying models for inference and, last but not least, enabling them in whatever real-world application you want to use that AI in. That grand vision, with those deep insights into what it really takes to build a good and effective AI solution, is why I’ve been all in on AWS.

Hilary Doyle: Your AWS origin story. Well, one of AWS’s origin stories is the much loved Mechanical Turk. In the early days of AI, Mechanical Turk single-handedly did all of the data labeling that was required. But as Ankur explained, that has changed.

Ankur Mehrotra: That has also expanded now into Amazon SageMaker Ground Truth. Mechanical Turk was not specific to machine learning; Ground Truth, which you can think of as a purpose-built tool that solves the same problem, now offers not just the tools for data labeling, but also workforce as a service. There are organizations that want to label their data at scale for building machine learning models, but they don’t want to hire a workforce to do that data labeling. Ground Truth offers that as a service too.

Rahul Subramaniam: Mechanical Turk was incredibly helpful for labeling and the early real world enablement of AI, and that is now largely being replaced by synthetic modeling.

Hilary Doyle: Synthetic modeling is transformational. AI has obviously found its way into everything, healthcare, design. And since so much of the industry was artificial already, the financial sector.

Ankur Mehrotra: Bloomberg, for example, they recently created a model called BloombergGPT, which is a generative AI model for the finance industry. They built it on SageMaker and it’s been a great partnership, but it’s also an example of how really we are just starting to see more vertical-specific applications of this technology.

Hilary Doyle: I love watching to see how news agencies are adopting AI and how quickly they’re moving. Axios, for example, integrated GPT into their CRM platform. Bloomberg has always been tech-first, so it’s no surprise to see them customizing for early adoption.

Rahul Subramaniam: Yeah. In the short life that GPT has had in the public domain so far, it is starting to become very apparent that there is a need for vertical-specific models. LLMs aren’t very good at getting all the facts right, especially the generic ones. And if you want to teach it domain-specific concepts and facts so that you can use them in a particular application, you need to build vertical-specific models.

Hilary Doyle: We’ve talked about the cost of these LLMs. It is egregious, both commercially and environmentally, and so I wanted to know how AWS views competition. Because obviously, they’ve been in this space for over 20 years. They’ve built out this massive infrastructure, and it’s the companies making use of it, like Stability AI, that have waltzed onto the scene and grabbed all the glory. My question for Ankur really was about sour grapes. Does AWS feel that they’re being overlooked in this conversation? Unsurprisingly, he does not see it that way.

Ankur Mehrotra: I think we’ve got a great partnership with Stability AI and many other research organizations, and we’re really proud of the work that they’ve done on AWS, the models that they’ve built on our infrastructure. We do have our own research efforts in the space. We just announced Titan, a series of models as part of Bedrock, but at the same time, we don’t think that one model, or one type of model, will really be able to address all the needs of customers. We really believe in giving choice to our customers, so we are actually super excited about our ongoing partnerships in this space. I think you’re going to see us continuing to partner more and offer greater choice to our customers with more models that are not built by Amazon.

Hilary Doyle: Customer obsession, it is no lie from the folks at AWS.

Rahul Subramaniam: Yeah. Over a decade ago, AWS laid out its vision where they said that they wanted to be the plumbing for the internet. They wanted to be the platform that everyone could use to build amazing stuff. While the conversation in the public is about whether AWS is losing the AI game, they’ve been busy building one of the most comprehensive AI platforms out there, and eventually, everyone is going to be using that. If you go to the AWS website and look at the number of services under the machine learning category, you’ll get a sense of what I’m talking about.

Hilary Doyle: AWS is definitely one of the world’s greatest plumbers. Five stars for its home visits. Ankur and I spoke about coding AIs, still in their infancy, but CodeWhisperer has, obviously, entered the ring. I wanted to know how far along AI-driven code really is. How far away are we from totally reliable coding AIs? At the moment, everything feels a little work in progress. So, how should we be thinking about deploying this software in its current state?

Ankur Mehrotra: Well, I think we’ve been really excited to see the initial adoption of CodeWhisperer. Adoption has far surpassed our expectations. During the CodeWhisperer preview, Amazon ran a productivity challenge, and we saw that participants were 27 per cent more likely to complete their tasks successfully when they were using CodeWhisperer. And they were able to complete those tasks 57 per cent faster than other participants. This is not just a cool demo or a toy. This is actually something that is helping developers be more productive. I really believe the adoption of AI-based coding is going to continue to grow, and I think CodeWhisperer has shown us that this is possible.

Rahul Subramaniam: One of the conversations that I see out there is that AI is going to replace people and do all of their jobs. That’s a scenario for the future and somewhat of a North Star for AI. The reality is that AI as a peer or a buddy, or even as an assistant, can improve productivity significantly even today. As Ankur just talked about, CodeWhisperer is an excellent example that proves that point.

Hilary Doyle: As you said, we are very focused on how AI is making life easier and yet, people make mistakes and AI continues to make mistakes right there with them. I asked Ankur about some of the common mistakes AWS customers are making now.

Ankur Mehrotra: We see a lot of customers get stuck at the data stage in the machine learning life cycle: collecting, organizing, and prepping their data. The first thing is really to think about the data strategy and make sure that customers are using purpose-built tools to organize and prep their data for machine learning. This is where we’ve built tools such as Amazon SageMaker Data Wrangler.

The second one, which is broader, is what we internally refer to as ML industrialization. Industrialization is really about automating and scaling. Think of how the automotive manufacturing industry was able to scale when it standardized on the assembly line.

Rahul Subramaniam: Correct.

Ankur Mehrotra: Right? Not just scale, but scale reliably. I think the same analogy applies to machine learning as well, where we think that use of bespoke tools is going to slow them down and they won’t be able to scale reliably. That is where end-to-end machine learning services such as SageMaker really come into the picture, which offers purpose-built tools for training, deploying and managing machine learning models.

And then the last thing would be also to think about the responsible AI aspect of machine learning. Because machine learning is more and more being applied to critical use cases and customer-facing use cases. That is where we’ve built tools such as SageMaker Clarify, which can really help you understand the, let’s say, bias or feature attribution drift in a model that you’ve built. And then also with tools like SageMaker Model Monitor, which helps you stay on top of any kind of changes you may see in the data or the quality of the model once it’s deployed in production.

Rahul Subramaniam: From that entire snippet, industrialization of AI is what’s stuck in my head. And I completely agree. Like I said before, AWS is thinking about the AI factory while everyone else is thinking about that one particular application of AI.

Hilary Doyle: Now, we’re a show about cost optimization, so we were hardly going to miss an opportunity to discuss cost efficiencies with the Great Oz himself. Ankur had some helpful tips for cost optimization across hardware and software configurations. Here’s some of what he had to say.

Ankur Mehrotra: There are many aspects of this. One is, with all our services, we think about how we can offer a pay-as-you-go pricing model, and that is true of SageMaker and many other services. But at the same time, we are also investing in many other kinds of optimizations. As an example, AWS built our own custom silicon – our own chips – for machine learning, which is offered through EC2 instances powered by Inferentia and Trainium. We’ve also invested in things such as distributed training capabilities within SageMaker, where customers are able to achieve better price performance compared to alternative solutions when training, let’s say, a large model, which can really be expensive and can require a lot of compute resources.

Overall, our approach to helping customers be more cost-efficient when they’re doing machine learning is anchored in our overall pricing approach – we don’t lock customers into long contracts – but also in the kind of cost optimizations we are bringing to customers in terms of purpose-built hardware, as well as software optimizations.
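To make that concrete, here is a hedged sketch of what pointing a SageMaker training job at that purpose-built hardware might look like. The configuration keys mirror the SageMaker Python SDK, but the training script, role ARN, S3 path, and instance choice are all placeholder assumptions, so the SDK calls are commented out.

```python
# Hypothetical SageMaker training setup aimed at price performance:
# a Trainium-powered instance type, scaled out across two instances.
# All names below are placeholders, not a recommended configuration.

training_config = {
    "entry_point": "train.py",            # your training script (assumed)
    "instance_type": "ml.trn1.32xlarge",  # Trainium-powered instance type
    "instance_count": 2,                  # scale out for distributed training
}

# from sagemaker.pytorch import PyTorch
# estimator = PyTorch(
#     role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
#     framework_version="1.13", py_version="py39",
#     **training_config,
# )
# estimator.fit({"train": "s3://my-bucket/train/"})

print(training_config["instance_type"])
```

Because it is pay-as-you-go, the cost lever is exactly these two fields: the instance type and how many of them the job runs on, for how long.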

Rahul Subramaniam: With all the custom hardware, along with the curation of the other AI tools, you can almost visualize what this AI factory looks like.

Hilary Doyle: Rahul, what will you take away from these conversations with Benedict and Ankur?

Rahul Subramaniam: Okay. Here are my three takeaways. While GPT-X, let’s just call it, or whatever version it is, and the LLMs are great, the underlying insight is that we are at a stage where we need to rethink how we do everything. Take the example of universal education. Salman Khan of Khan Academy just did a TED Talk where he talks about how the future of education isn’t a teacher teaching content to a class full of students, but giving every student a personal AI tutor that can produce a two-sigma improvement in their learning.

Second, AWS is building some of the most impressive foundations for a future in AI. It is imperative that organizations learn to use them effectively. And third, the cost of AI isn’t cheap.

Hilary Doyle: Testify.

Rahul Subramaniam: Everyone needs to figure out quickly how to build AI tools and applications in the most cost-efficient way possible. And I don’t see how you can even start to do that if you aren’t on a platform like AWS.

Hilary Doyle: As always, we are keen to hear your thoughts on AI, your ChatGPT’ed sheets and all things AWS. Sometimes, we even send out prizes for good questions, so hit us up at podcast@cloudfix.com. We haven’t actually ever talked about prizes, but I feel like we’re there.

Rahul Subramaniam: Absolutely. Please leave us a review, and don’t forget to follow this show to get the new episodes as soon as they’re released.

Hilary Doyle: If a sentient robot doesn’t get to you first. AWS Insiders is brought to you by CloudFix, an AWS cost optimization tool. You can learn more about them, and you should, at cloudfix.com.

Rahul Subramaniam: Thanks, everyone, for listening. Bye-bye.

Meet your hosts

Rahul Subramaniam

Host

Rahul is the Founder and CEO of CloudFix. Over the last 13 years, Rahul has acquired and transformed 140+ software products. More recently, he has launched revolutionary products such as CloudFix and DevFlows, which transform how users build, manage, and optimize in the public cloud.

Hilary Doyle

Host

Hilary Doyle is the co-founder of Wealthie Works Daily, an investment platform and financial literacy-based media company for kids and families launching in 2022/23. She is a former print journalist, business broadcaster, and television writer and series developer who has worked with CBC, BNN, CTV, CTV NewsChannel, CBC Radio, W Network, Sportsnet, TVA, and ESPN. Hilary is also a former Second City actor and the founder of CANADA’S CAMPFIRE, a national storytelling initiative.
