
#257: Analyst Use Cases for Generative AI


Judging by the number of inbound pitches we get from PR firms, AI is absolutely going to replace most of the work of the analyst some time in the next few weeks. It’s just a matter of time until some startup gets enough market traction to make that happen (business tip: niche podcasts are likely not a productive path to market dominance, no matter what Claude from Marketing says). We’re skeptical. But that doesn’t mean we don’t think there are a lot of useful applications of generative AI for the analyst. We do! As Moe posited in this episode, one useful analogy: getting an analyst to use generative AI effectively is like getting a marketer who has been living in an MTA world to embrace MMM (it’s more nuanced and complicated than it first looks). Our guest (NOT from a PR firm solicitation!), Martin Broadhurst, agreed: it’s dicey to fully embrace generative AI without some understanding of what it’s actually doing. Things got a little spicy, but no humans or AI were harmed in the making of the episode.


Episode Transcript

[music]

0:00:05.8 Announcer: Welcome to the Analytics Power Hour. Analytics topics covered conversationally and sometimes with explicit language.

0:00:14.8 Michael Helbling: Hi everybody, welcome. It’s the Analytics Power Hour. This is episode 257. You know, since the Industrial Revolution, it seems like the interest in automation is always around. And in the analytics space, there’s always a lot of interest here as well. You know, that entails handing off parts of the work to a machine to increase efficiency. These days, AI is the newest entrant into this discussion. How and what can we hand off to an AI when it comes to analytics? Are they gonna take our jobs? Will it truly usher in an era of data democratization? I don’t know. I guess we should talk about it. And to do that, let me introduce my co-hosts, Moe Kiss, how are you going?

0:01:00.4 Moe Kiss: I’m going great, thanks for having me, Michael.

0:01:00.9 MH: It’s awesome. And Tim Wilson, some would say you’re already a computer. Your results are too perfect. Now, how you doing, Tim?

[laughter]

0:01:10.9 Tim Wilson: Ouch. I’m getting to where I’m a computer when it comes to responding to a podcast pitch about… Pitches about generative AI for analytics.

0:01:21.1 MH: There you go. That’s a part of the job that’s…

[overlapping conversation]

0:01:25.6 TW: They’re flowing in fast and furious and…

0:01:26.0 MH: Fairly automated.

0:01:29.3 TW: Reached out to Martin ’cause we’re like, “How about we go with someone who we reached out to instead of somebody who came in to us?”

0:01:34.9 MH: Yeah, a lot of interest in this, and I’m Michael Helbling and we did wanna bring on a guest who is at the forefront of this issue and luckily at Marketing Analytics Summit this year we met Martin Broadhurst. He’s a consultant on AI for marketing, the owner of Broadhurst Digital, and he serves on the editorial board of the Journal of Applied Marketing Analytics, and today he is our guest. Welcome to the show Martin.

0:01:56.8 Martin Broadhurst: Hello Michael. Hello Moe. Hello Tim. [laughter]

0:02:00.2 MH: All right. Well, we’ve got a lot of questions. So buckle up and in the next hour or so, hopefully we’ll learn a lot about what AI can do for us in analytics, or what it can’t.

0:02:10.3 MK: I’m not gonna lie. I’m like weirdly scared of this episode and it has been on my mind a lot.

0:02:15.2 TW: What? Why?

0:02:16.5 MH: All right. Well, let’s dig into that. Maybe this is just a… Martin, what we need is reassurance for all of us that we’ll still have jobs after this or something [laughter]

0:02:26.1 MB: I don’t see anybody’s job going anywhere in a hurry, not to spoil what’s to come. But yeah, I think you’re okay for the time being.

0:02:35.9 MH: Yeah. Well, maybe Martin, to kick this whole thing off, we can talk a little just about how you got into this area in the first place and sort of some of the things you’re seeing in the industry right now.

0:02:45.6 MB: Yeah. So my background is in the CRM and marketing automation space. This is where I’ve been working with businesses for years now. And when OpenAI made the GPT-3 API available, I immediately started playing around with it and experimenting with the different tools, seeing what the capabilities were and understanding the mechanisms of how these large language models actually worked to try to kind of push them to the limits. And over time, I’ve just built up a lot more experience with that. And yeah, this has turned into a nice general addition to my skill set where I’m working with clients on how to automate and find use cases for AI and generative AI in their workflows and in their day-to-day tasks.

0:03:36.4 MB: And unsurprisingly, data analysis is something that comes up quite a bit. So, yeah, I’ve been trying to test the models as much as I can to see where the limits are before they break. And this month I’ve just published an article in a journal about how to use large language models with spreadsheets, with a bunch of different techniques for how to think about using generative AI alongside spreadsheets and spreadsheet design.

0:04:09.6 TW: I mean, it’s a short article. Don’t you just say, “Here’s the spreadsheet and find me insights,” and then it just goes from there? I mean…

0:04:17.6 MB: That is the dream, isn’t it? Wouldn’t it be great if that actually worked like the marketing spiel?

[laughter]

0:04:26.6 MK: Well, okay, the fear, the fear I have at the moment is actually not about losing my job because I see the amazing efficiencies I even already have in my own job. What is terrifying me at the moment is the, “We wanna do AI.” We had a conversation the other day, “We wanna do GenAI on this thing.” And I get really… Let’s just say anxious. Let’s call a spade a spade. We’re kind of swapping the way we would normally solve a problem from what is the problem? What are all the ways to solve it? What is potentially the simplest, most explainable, whatever way to get there? Versus going, “We’re going to solve this problem with X. How do we do more of X?” And that’s the bit that’s stressing me out.

0:05:15.3 MB: Yeah. Finding the… Or prescribing AI first before you’ve even dug into the potential solutions, starting with that and saying, “We… ” And that’s actually one of the things that clients will sometimes say to me, they’ll just come to me and say, “We want to use AI.” It’s like, “Well, why would you start with that as the solution before you’ve looked at the implementation?” And I think this is a really common problem. I would always start with, look at those tasks that you do that have things that require certain amounts of batch work, where there’s just repetitive nature to the tasks and you can automate that away. But yeah, really understanding the nature of the problem is probably the starting point before you even get into what the solve is.

0:06:04.8 MH: But is there… Generative AI seems tangible ’cause it’s so easy for somebody to play with it, whereas I would say there’s a higher bar for someone to just dabble with SQL or Python coming from scratch. So it’s broadened the audience of people who can get a taste of what the technology is. And to me the massive miss is that just because you get a sense of what it does, you have a back and forth with ChatGPT, it kind of misses what analysis is. And like, it feels like there’s an oversimplification of the steps, of saying, “Oh, well, no, no, AI is just gonna take the drudgery of the tasks.” And you say, “Well, the drudgery of my analysis work is doing this data cleanup. And I’ve played with this ChatGPT. So what if I just told it to do that?” But it kind of misses, even in the drudgery of the work, what the human component is, much less just the reality of identifying a problem where you’re trying to use data to solve it, like it just…

0:07:23.0 MH: It feels like it’s this big bucket of like a tool and somehow people are like, “Oh, well, the tool must be smart enough to get to how to fix it.” The fact that you said you’ve got the spreadsheets thing feels like even that is nuanced, because you have to kind of help it understand what a spreadsheet is, which maybe it kind of knows, and then sort of what the data within it represents, right?

0:07:47.2 MB: I think what you’re kind of driving at is that people don’t understand the tool, the nature of the tool, and the kind of mechanism behind the tool. I think it’s really important that with generative AI people understand things like next-token prediction. What does that mean? What is it doing under the hood? And when you’ve played with the models a bit you understand some of the settings, things like temperature, for instance. So for anyone that isn’t aware of the temperature setting in a large language model, there is a setting between zero and two, and the higher it is, the more chaotic the answers you get… And the basic principle of temperature is that it’s like in physical systems: the higher the energy in a system, the more chaotic it is, and the lower the temperature, the more controlled it is.

0:08:40.1 MB: If you play around with that in the API, for instance, you can get really consistent answers. But when you use something like ChatGPT, you don’t have access to that particular setting. So it’s generative, right? It’s not descriptive or calculating. It’s coming up with a range of answers, and it’s sensitive to the subtleties in the way that you write, the way that you input, the way that the data might be structured, whatever it may be. And if people think it’s like computer software that they’ve always used in the past, where you press a button and it always gives you this thing, it does this job consistently in the same way every time, they will be sadly mistaken, because that’s not what’s going on under the hood.
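
For readers who want to see that concretely, here is a minimal sketch of the temperature setting Martin describes, using OpenAI’s Python SDK. The model name and prompt are illustrative assumptions, not anything from the episode:

```python
# Minimal sketch: the same prompt at different temperatures.
# Model name and prompt are illustrative, not from the episode.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Suggest one name for a podcast about analytics."

for temperature in (0.0, 1.0, 2.0):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0 = most repeatable, 2 = most chaotic
    )
    print(temperature, response.choices[0].message.content)
```

At 0 the answers are nearly (though not perfectly) repeatable; at 2 they can wander into nonsense, which is the chaos being described.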

[music]

0:09:27.0 MH: It’s time to step away from the show for a quick word about Piwik PRO. Tim, tell us about it.

0:09:32.9 TW: Well, Piwik PRO has really exploded in popularity and keeps adding new functionality.

0:09:38.6 MH: They sure have. They’ve got an easy-to-use interface, a full set of features with capabilities like custom reports, enhanced e-commerce tracking, and a customer data platform.

0:09:50.4 TW: We love running Piwik PRO’s free plan on the podcast website, but they also have a paid plan that adds scale and some additional features.

0:09:56.9 MH: Yeah, head over to PIWIK.pro and check them out for yourself. You can get started with their free plan. That’s PIWIK.pro. And now let’s get back to the show.

0:10:10.3 MK: Oh. I’ve just had this, I don’t know if this analogy makes sense Tim but hear me out, I constantly am doing this thing in my head where I’m trying to understand stakeholders’ perspectives and understanding an MMM, and how it’s different from their worldview of attribution and what attribution gave, which when you see a table and it goes: This channel this many sign ups, this channel this… Like the concept of MMM results is quite difficult. We start talking about diminishing return curves. We start talking about return on ad spend at different spend levels. And like, there’s just all this like complexity there. And I feel like a similar analogy could be made here, right? Like, you expect input output, but there’s actually so much nuance. Like, is that a… Is that like… I don’t know if I’m grasping at straws here, but in my mind, I was like, this would be the problem of people trying to take data analysis using GenAI without understanding it well enough. That would get you into the danger territory, right?

[overlapping conversation]

0:11:23.8 MB: I think that works on both… At two levels. There’s the not-understanding-the-GenAI-mechanism well enough, so not really understanding the strengths and the weaknesses of the tool, which is going to be a hindrance in and of itself. But then there’s also that level of… People often say that if you use a large language model and you are an expert, you can get expert-level outputs from it. The better the quality of your input, the better the quality of the outputs. But if I, as someone who isn’t a seasoned data analyst, throw in a spreadsheet and say, “Give me some insight into this,” I’m asking bad questions and I’m getting very average outputs.

0:12:06.3 MB: So it works on both ends of the spectrum. If you’re not giving good context and good prompts, you’re going to get bad outputs. But also if you don’t understand the limits of the technology itself, you might just… Well you don’t know that it can’t actually do the thing you’re asking it to do.

0:12:22.5 TW: That’s… Cassie Kozyrkov last month wrote a post that was very timely as we were prepping for this episode. It was ‘Strawberry’s Paradox: When Perfect Answers Aren’t Enough’. She sat with a Nobel Prize winner who she worked with while she was working on her PhD, and they have just a riff of a conversation for a while. But what she, I thought, very, very well articulated was: imagine the AI that can give the perfect answer, that is perfectly accurate and correct. If you, just as you said, Martin, if you don’t ask it a good question, it’s, you know, it is going to be like, “What’s the answer to life, the universe, and everything?” “It’s 42.” Right? It’s not a good question. And that’s this other piece that has kind of bothered me: the people who are looking at AI all have lived in a world without generative AI. So we’re bringing our human experience, having worked with data, having dealt with the business problems, having grappled with trying to explain multi-touch attribution versus MMM.

0:13:35.0 TW: And that’s the lens we’re looking at it through and saying, “Oh, here’s the future. It’s gonna take everything.” Well, if you fast forward and say, “Wait, that’s discounting the expert level of the input.” So even if that worked for a very, very short period of time, that would start to go away, because all of a sudden you’d have people who were trying to skip a bunch of steps of the human existence to get to the AI and hoping that the AI can close that gap, which seems very… I don’t know if that’s just philosophical, or it seems like, “No, that’s what would happen. We’re counting on the tool to close a gap that the tool doesn’t seem like it’s ever gonna be equipped to fully close.” I don’t know if that made any sense. Moe, I’d really like your analogy. Or we can just cut this whole section out and… [laughter] I mean, where do you, with a spreadsheet, where are you using… What’s the start and end point of generative AI when given a spreadsheet?

0:14:49.5 MB: So I think some context has to be given there in that these models are changing rapidly. It was only a few weeks ago that we had ChatGPT o1-preview released, which is supposed to be, you know, much better at reasoning, although that’s its own conversation in and of itself. The models’ capabilities are changing all of the time. So what I propose is that there are, as it stands, four ways that you can really use ChatGPT or any large language model with a spreadsheet. And one is to… And my preferred route is to just use it as a coach or a mentor. It’s that very clever assistant that you’re not actually giving access to the data, but you are… You get stuck on something. Maybe you need a bit of code writing that you can stick in a macro, or you’ve forgotten the function to do a certain thing, or you’ve got a really long formula that you need optimizing and reducing.

0:15:45.0 MB: It will do all of that for you. And the actual spreadsheet and the language model don’t interact. This is where AI is very strong at the moment. It can be quite good for that. There are over 500 functions in Excel. Trying to keep all of those in your head is very difficult. Whereas if you’ve got that very smart assistant next to you, it can go, “Oh yeah, I know exactly what that is.”

0:16:11.3 MB: Then you’ve got the file ingestion. This is where you can give the spreadsheet to the model. So you can upload to ChatGPT the CSV, the Excel file, whatever it may be, and it can use Python in its code environment to execute tasks and functions on the data. The outputs from this can be very good. It can do some incredibly powerful things, but it comes with a big flashing warning sign saying the outputs can also be complete hallucinations. I have got lots of examples. In fact, nearly every single time I do this, the data that it presents back has some errors in it that, if you’re not paying attention, you would not spot.

0:16:54.9 MB: So, case in point, from the Marketing Analytics Summit, I showed an example where we had a bar chart showing cohorts grouped by age. And there were two bars, satisfied or unsatisfied. And it was just, which one was higher? A blue and an orange bar. And the charts that it creates are accurate. It seems that the data manipulation behind the charts that it creates is accurate. But then its written description of the charts is wrong. Like consistently, it would say, “You can see that for the 35 to 50 year-old cohort, satisfied is higher than dissatisfied,” and it’s clearly the other way around. And this is really consistent. This comes up time and again.
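
As a concrete illustration of the check Martin is implying, here is a sketch of recomputing those cohort counts yourself in pandas and testing the model’s written claim against the actual numbers. The file and column names are hypothetical:

```python
# Sketch: verify an LLM's written claim about a chart by recomputing
# the underlying numbers. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("survey.csv")

# Satisfied vs. dissatisfied counts per age cohort.
counts = pd.crosstab(df["age_cohort"], df["satisfaction"])
print(counts)

# The model claimed satisfied > dissatisfied for the 35-50 cohort;
# check that claim directly rather than trusting the narration.
row = counts.loc["35-50"]
print("claim holds:", row["satisfied"] > row["dissatisfied"])
```

The point is not this exact code; it is that the couple of lines of verification are still a human job.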

0:17:44.8 MB: So you wouldn’t want to rely on it for uncovering the insights. Because the…

0:17:52.7 MK: Do you know why… Like, I know that me asking why is a stupid question right now. Like we don’t get to look inside the black box, but like, that’s a really strange error, like really strange that it would be able to interpret it correctly in the graph. But then… Is it like something to do with converting it to the graph and then the graph back to the descriptive text? Or like, is that the step too far? Like I just… How do you know… You don’t know where the boundaries are.

0:18:18.5 MB: So the graph is separate from the… From what… The model doesn’t see the graph. The model runs the Python and then takes the… And turns the Python script into something that sits in the HTML in the browser window, but the actual model doesn’t see the output. Because the model has turned everything into tokens, where you’ve got a graph that has, or it’s done the… It’s used Python, it’s got some numbers attributed to the different cohorts, and positive and negative, or satisfied and dissatisfied. They’re just token IDs for the model. So it’s not like… The system doesn’t see the raw number. It sees the tokenized version of the number and then has to, in its model, understand the relationship between these… This is my best guess, right? So I’m making some assumptions here. I would like to see, particularly within the chatbot version of these tools, I would like it where it creates the graph and then turns the graph into an image and feeds that back in. Because the funny thing is, if I take a screenshot of that graph and feed it back into ChatGPT and say, “Tell me what’s going on with this data,” it consistently does a very good job of that, because it’s got the vision capabilities.

0:19:38.9 MK: That is nuts.

0:19:40.1 TW: Sort of like the second order of thinking is where it starts to fall apart.
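
Martin’s screenshot workaround can be scripted, too. Below is a sketch of handing a rendered chart back to a vision-capable model through OpenAI’s chat completions API; the file and model names are assumptions:

```python
# Sketch of the workaround: send the rendered chart back to a
# vision-capable model as an image instead of trusting the text that
# came out of the code-execution step. File and model names are assumed.
import base64
from openai import OpenAI

client = OpenAI()

with open("chart.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Tell me what's going on with this data."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```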

0:19:43.7 MH: But what, so that’s the second… So that was like number two, I think, of four, like the, like to what end it ingests it and outputs a result. And maybe that’s going to get better with the added reasoning as more models come along. Is it going to be easy for somebody to just wave their hands and say, “Oh, well, you’re the second one,” it’ll ingest it and it’ll output results, and the results will be very, very reliable? It can count the number of Rs in strawberry and it will always give the right answer. So is that an easy one to kind of check off and say, “Yeah, that’ll get fixed”? Or…

0:20:21.1 MB: I would expect so, but we don’t know at the moment. o1 doesn’t have… You can’t do file uploads. You can’t upload images. It’s just text in, text out. You would expect that to improve. The next method is actually using the assistants within the spreadsheet software itself, so Microsoft Copilot by way of example. This is a really difficult one to judge because the version that I wrote the paper on was the previous version. And then literally I think the day the publisher signed it off, they announced wave two of Copilot, which has new capabilities. So the new version, which I haven’t yet tested, is supposed to be able to actually write and execute Python on new spreadsheets and do more.

0:21:13.3 MB: It can actually interact with more of the tools and the functions because the old version could do that, but would often say that it had done a task and it hadn’t done a task, or it would tell you that it couldn’t do a task because there was too much data. Whereas I think those limitations on wave two have been lessened somewhat. So that’s really, if we think about where we… What the ideal is, I think this is the ideal. You want the chatbot in the environment where you’re working with that data and it’s able to actually execute almost agentically different functions, tools, tasks, directly within the file itself.

0:21:53.5 MK: So the, okay, the layperson’s version of this: rather than going to something separate, having to kind of ingest the data, yada, yada, yada, it’s built into it. And the difference is not only can it, like, work as a “helper”, or whatever some smart marketing person called it. It can also actually execute functions on your behalf. So it can do the doing, not just give you steps on how to do the doing.

0:22:22.5 MB: Yeah. And the first version of Copilot in Excel was supposed to be able to do some of the doing, but it did it wrong really more often than you would ever want the tool to. It felt like it was released a little bit too early, which, you know, fair enough, they’re iterating on these things really quickly, but yes, it should… And I think the more important thing with that is actually it can write and execute Python within the environment, which just adds a lot more capability to Excel.

0:22:52.0 MK: I’m really curious from like a product perspective, because that, what you’re talking about here basically implies that unless you were truly embedding this technology into the product roadmap in a really meaningful way, you will probably fall behind in any kind of tech company, which I hadn’t really thought about. Yeah. Okay. I’m having lots of light bulbs. Maybe I should do more recordings in the evening.

0:23:21.0 MH: But [laughter] so I’m trying to figure out the limits of that. And this is also realizing, again, from the slew of pitches we’re getting for guests on the show, like the term Gen BI, like, “Oh, Gen AI is going to bring Gen BI,” which I’m trying to figure out of these three categories, like, where does it go from: I’m a user of Excel, which means I’m a human being on the planet, and I’ve got a tool that gives me a little bit more of a natural language interface at kind of a micro level to go bite-sized along the way? Where that gets… where I’m not sure if that’s included is, “Oh, well, you’re just going to have a natural language interface to ask how much revenue did we get by channel last month?”

0:24:12.0 MH: That feels like more dangerous territory than saying, “Hey, can you extract… ” put a filter in so it flags anything that’s within the US as “US” and everything else as “rest of world,” which is a more specific instruction. Is that a spectrum, or is there a hard line you’re crossing from a Copilot to my hoped-for, wished-for natural language interface to the data that is reliable?

0:24:48.2 MB: I think that’s where Microsoft would like Copilot for Power BI to get to. I don’t have any experience with that, particularly with this new wave of updates that are coming or have recently been announced. What I can say is that Power BI power users that were really interested in Copilot stress-tested it at the start of the year. And one description said, “It’s not ready for CEO-level insights and presentation of data at the moment. It’s quite simple. If there are several steps of manipulation of the data that you need to do in order to get the insight that you’re after, it falls down. It doesn’t understand at the moment relationships between different entities in your data set.”

0:25:41.0 MK: So how are you seeing companies use this, or like analysts use it in their workflow? Kind of like, I know we’ve talked a little bit about the spreadsheets, but if you take the CEO example of amazing boss lady comes to you and says, “Sales are down, what’s happening?” And you go through that analyst workflow of solving the problem. Like, do you have kind of any intuition how people are really leveraging this in their day-to-day?

0:26:09.1 MB: The file ingestion, if you can get your data sources into ChatGPT, you can get, with the right prompting, really good insights really quickly. It can bring together multiple data sets. It can merge them, and it can, if you are very good at being able to describe your data and what you’re after, it can give you those graphs and those charts. How much people are doing that day to day? I am… I don’t see that a great deal. When I speak to people the most common experience I have is people going, “It didn’t quite do it for me.” Like, “It told me something was wrong.” So there’s an element of doubt that is seeded in people’s minds. And this is the thing.

0:26:55.8 MB: I think people are so used to using a spreadsheet, a calculator, something that gives numbers in, numbers out, that makes sense and is always true. Whereas here you have a tool where you use it 10 times and 2 times you go, “That’s not right.” It plants a seed of doubt in your mind. So I think until the hallucinations issue is cracked, we’re not quite going to get there. Everything feels, particularly on the data analysis side, I would say you can get surface-level insights, or you can get visualizations created very quickly. You can do data manipulation very quickly. If you’re someone that doesn’t know R and doesn’t already know how to manipulate the data, you can do that. It gives you those additional skill sets, or access to those kinds of skills, in a limited capacity. But how much people are using this in the day-to-day, I would dare say it’s more as an assistant to help them shortcut some code-writing functions rather than really relying on it for insight.

0:28:07.7 MH: So, what is the fourth way? I feel like I want to dive back into, and I’m not sure whether I’m hitting a gap or whether I’m hitting a… Or just know that enough of our listeners will be like, “He said four, he said four.” So, and I want to break that tension, so.

0:28:19.9 MB: Yeah. I did say four.

0:28:22.5 MH: This show, much like an AI, gets lost along the way.

[laughter]

0:28:29.1 MB: There is a fourth, and the fourth is actually less useful for analysts in some respects, but it’s actually adding an entirely new function to the spreadsheet itself. So a good example of this is Anthropic’s Claude model has a Claude for Sheets add-on. So it’s a Google Sheets add-on and it creates a new function, equals Claude, and then equals Claude open bracket, and then you can put your prompt in there. And then the return of that prompt is what populates that cell. So that means that you can assemble prompts using data input from other cells. And just like you would any other formula, you can build a formula and then send that to Claude and get Claude’s response straight back into the spreadsheet.
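
In the sheet itself that looks like a formula such as `=CLAUDE("Classify the sentiment of this review as positive or negative: "&A2)`. A rough Python equivalent of what such a per-cell formula does, using Anthropic’s SDK, might look like the sketch below; the model, file, and column names are illustrative assumptions:

```python
# Rough Python equivalent of a per-cell =CLAUDE(...) formula: build a
# prompt from each row's value and store the response in a new column.
# Model, file, and column names are illustrative assumptions.
import anthropic
import pandas as pd

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

df = pd.read_csv("reviews.csv")  # hypothetical sheet with a "review" column

def ask_claude(prompt: str) -> str:
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name
        max_tokens=50,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

df["sentiment"] = df["review"].map(
    lambda review: ask_claude(
        "Classify the sentiment of this review as positive or negative: "
        + str(review)
    )
)
print(df.head())
```

Because such a formula recalculates like any other, each cell’s output is a fresh, probabilistic response, which is worth remembering before pointing it at a thousand rows.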

0:29:13.9 MH: Okay. So then now I’ve got, well, one thing, and maybe it falls in the second kind, the file ingestion. It seems like there is a lot of using generative AI for analytics that winds up really being generative AI for analytics engineering, or for data engineering, or for data observability, or for… So there does seem like there’s a whole class of tools that are either kind of pipeline-building assistants or data monitoring, which to me, that’s not the analysis, that’s the upstream. And my gut is that 60 or 70 percent of things that get labeled as generative AI for the analyst are really generative AI for the data engineer or the analytics engineer; would you agree with that? I mean, are you seeing those where that’s getting labeled as for the analyst, but it’s not for analysis, and that’s causing maybe some confusion in the market?

0:30:25.0 MB: Yeah, I think that’s probably true. And I’m just yet to see really strong use cases. And I guess you guys are more at the coalface of this than I am. I’m yet to see really strong examples where people have said, “We use generative AI for this level of insight and analysis and look how I did it. And that was all AI, ta-da,” you know, “We sprinkled in some data and got this amazing output; isn’t it great? Aren’t your jobs all doomed?” I’m not seeing that.

0:30:57.4 MK: See, I find there’s two groups of people. There are the people that are doing very cool shit and are doing it pretty quietly and not telling people. And that kind of tends to be the way that I… I mean, I’m not saying that I’m awesome, but when I’ve taken a shortcut, I’m not going to tell people; let them think I did all the work. Not that I transcribed a voice note and then used it to write up my interview feedback and then pasted it in, in a really efficient manner. And everyone thought that my interview feedback was spot on. But then there’s the other group that are like, “Oh my God, we did AI, look what we did. Over here, over here.” And it’s like, it just seems to be really polar opposite. I don’t feel like we’re at that maturity of educating people about how to do it well and the pros and cons. Like it seems to be, I don’t know, like very polarizing at the moment, but maybe that’s just my lived experience.

0:31:55.5 MH: Having gotten buttonholed by somebody who was definitely the latter, it was a really long and exhausting conversation. And it really wasn’t a conversation, it was just him going on. What was interesting is that when I did probe with him, all of this really, really cool stuff was around rapidly pulling in data sources and being able to use webhooks and generate code to pull data sources in, and then, with some iterating on the model, do some kind of mining of these multiple data sources to generate something, which was all very interesting, except for two things. And clearly this fella talks about it to anybody who will listen to him and does not stop. And then he started making these bold claims about how any company could go from $0 to $10 million with one person with just AI.

0:33:01.2 MH: This is amazing. But then as I was probing, one, he admitted that for all the stuff he did, he did actually have to talk to the subject matter experts to even figure out what it should be doing, which seemed like very much a human task. The thing that we didn’t get into that just seemed… He went on at great length about how he does not have a technical background. But he also went on about how he didn’t have to write any code. He would just have this generate the C-Sharp and then he’d take it. And that felt like another component of, well, that seems sort of fragile. The playing around I’ve done with code generation is it’ll generate something, but it may not be clean or well-written or something that you want to have live on for the ongoing production of any sort of ongoing deliverable. I talked to my son, who’s a software engineer, and you can get him started on somebody who’s a crappy software engineer, or some faceless person in the past whose work he’s inheriting downstream. And I’m like, “Oh my God, the ability for a machine with a temperature setting, probabilistic in nature, to generate code.

0:34:22.2 MH: That’s then going to live, that some poor analyst or some future generative AI needs to modify? How is that going to work?” So the ability to say, if you’re going to write something that needs to have staying power, you can use the code assistant, but you probably need to know the code and maybe do some real iteration with it, as opposed to just saying, “I don’t need code.” I mean, I’ve had multiple people saying, “No one needs to learn to code. It’ll just generate it for you.” And I’m like, “Well, that’s somebody who’s never learned to code saying that.”

0:35:01.3 MK: Can I challenge you a bit here? One of the things that is a little bit exciting, like when anyone asks me, I will say, “I’m not that technical. I can do a bit of programming, but I’m pretty shit.” And I’m way more shit than I was five years ago. And I know I have people in my team, for example, that would share that, and they would say, “That’s not my strength. Programming is not my strength.” They definitely have made endeavors to learn and will try their best, but they’re never going to be a gun programmer. They’re not the ones that are QA-ing 50,000 PRs from other engineers, data scientists, every day. One of the things that I find so challenging in data is to find people that are really good at figuring out how to answer a question, solve a problem.

0:35:49.0 MK: And it’s like the idea that you could have someone who might not be strong in a particular skill, like programming, but has this real superpower to understand and answer a business question and you can make them better at their job by kind of giving them this free buddy or coach or technical mentor. Like, I find that fucking exciting. That is cool.

0:36:18.7 TW: So I think you glossed over the thing that I think just gets glossed over all the time, which is you…

0:36:27.1 MK: Oh, tell me more.

0:36:27.2 TW: And those people, well, and all those people, they tried. It’s not saying you have to be an elite-level programmer. And I think Cassie’s article that I mentioned, the Strawberry’s Paradox, is very much on that. It doesn’t mean you have to be a hot-shit programmer, but discounting the effort to learn, learning SQL and learning VLOOKUP and learning what a left join, what a join is. If you completely skip that and say, “Oh, but somebody just has a great sense of answering business questions.” One, I think that’s actually often discounting what they’ve learned, and part of their ability to answer the business questions comes from struggling through some of the technical aspects of it.

0:37:13.4 TW: Like, learning that stuff helps you understand how data works, right? If you do a thought experiment where somebody’s never had the sense of a join introduced to them and they just say combine data sets, you wind up with kind of the very casual business user where you’re having a very circular discussion, because they don’t understand that you need a key to join two data sets. So I think we’re really good at skipping that point, of saying, “No, no, no, this is gonna be great.” It’s like, well, no, but the people have to learn that. It may not be their interest or their passion, but they’re learning very, very valuable things that go into their ongoing cognition by trying to learn that technical stuff. That is part of who we are. And we’ve started saying, “Oh, we can skip that. You don’t need to do it at all.”

0:38:06.9 MK: But who’s saying that? Like I know that there is the odd person, but I would… Like, if anyone came to me and said, “I wanna be a data analyst,” and they’re like, “Guess what? AI’s out there, I don’t need to learn any programming.” I tell them to…

0:38:22.3 TW: That’s…

0:38:23.2 MK: I’d be like…

0:38:25.1 TW: A thousand percent what the fucking analytics translators were saying. I definitely dealt with…

0:38:29.3 MK: No, not at all.

0:38:30.8 TW: No. I’ve had people tell me… I had somebody who was a long-time Google person adamantly tell me, “No one ever needs to learn code again.” And I was like… He was like, it’s just… He’s like, “No, you don’t need to ever do code.” And I’m like, “I can’t believe it.” So, absolutely. And I will say, going back pre-AI, there were people who were coming in who were enamored with the idea of analysis and the idea of doing stuff with data, but said, “Ooh, I don’t wanna learn anything technical, and this analytics translator role, I can just… ” And this was way before Gen AI. I’m not actually denigrating the analytics translator role, only if somebody thinks that means, “I don’t have to have any technical chops.” But I don’t know, Martin…

0:39:19.3 MK: ’cause for example, I know analytics translator is a very contentious thing, but when I think of it, I think of something very different to what you think of. And like, and this is the same situation…

0:39:32.8 TW: I’m looking at the people I know who have jumped on that role. Yeah, sorry.

0:39:36.8 MK: There is a spectrum though. At one end there is like, “I think I can do no programming and AI’s gonna do it all for me.” And then there’s like the middle people that I kind of talked about, who might be like not great at it, bit rusty, can use it. And then there’s the people that are like, “Why would I ever need AI? I’m such a great programmer.” But it’s always this spectrum.

0:39:54.9 TW: Hearing the description that you gave, many people would jump to saying, “Moe thinks that if somebody looks at programming and is not interested in it, they can therefore completely ignore it. They’re… ”

0:40:08.7 MK: No.

0:40:09.1 TW: I just… But I think that that’s how that can be heard.

0:40:16.6 MK: But Martin’s talking a lot…

0:40:16.9 TW: Hold on. Hold on. Let’s…

0:40:17.6 MK: Wait, no, Martin is talking a lot about the fact that there are so many mistakes made. How do you recognize a mistake if you don’t know what the wrong or right output is?

0:40:27.1 TW: Right. And that’s the thing is…

0:40:29.6 MK: And I know I’m using wrong and right in a very binary sense, but…

0:40:33.5 TW: The people who are super excited about what this is gonna do, have probably never done it. And that’s what we’re probably kind of circling around right now. Okay. Let’s put up…

0:40:44.2 MH: No, I wanna say more about that.

0:40:46.5 MH: Circle back. Let’s bring it back. ’Cause I think the place I want to go next is I want to talk, Martin, a little bit about sort of this idea, and this goes into a couple things. So one is, sort of, Moe, to your point, people who are using AI for various things aren’t really necessarily talking about it. And I think sometimes because there’s not a scalable process for the way that I might use an AI, I use it kind of in that first use case, as sort of this assistant coach-mentor thing. I’ll just pop open my little ChatGPT and be like, “Hey, I’m thinking about this. What are some ideas you’ve got?” And blah, blah, blah. I’ve never had ChatGPT look at data for me ever. I’ve had Claude look at a couple things, but I’ve never used them to do any kind of analysis of data. But I think this idea of exploring sort of the agentic process in analytics and sort of like, let’s step through some analysis scenarios and maybe look and see where we could leverage it. And Martin, where do you see kind of the best places for analysts to use AI in their day-to-day jobs? And that could be… We can give you some scenarios maybe to help with that.

0:42:00.2 MB: Tim, you look like you’re about to…

0:42:02.7 TW: No.

0:42:02.8 MB: No. Okay.

0:42:02.7 TW: No. I think I’m still… I’m waiting for my Generative AI to tell me that my blood pressure’s come down enough from my last rant to…

[laughter]

0:42:14.1 MB: Yeah, so give me the scenarios.

0:42:17.6 MH: Okay. Perfect. I’ll start with one. So, one that I think about all the time is a lot of what we do in analytics is really thinking through sort of basically an experiment of some kind, or some kind of analysis around this versus this. Like, “We’re gonna try this.” So, one of the really crucial skills for an analyst, I would say is being able to design a good experiment or think through the design of a good experiment. And so like, let’s say somebody comes to you on your team and is like, “Hey, we wanna run this campaign. We wanna see if this is a better way to do this.” Could you use AI to start to work through the answer to that question?

0:42:58.0 MB: I think that’s a… Like, the kind of design of experiments is really interesting, particularly with the new model, o1, with the reasoning capabilities. So the chain-of-thought capabilities mean that it thinks (“thinks,” he says, in air quotes) through the process. And it can be a very good constructive critic. So, giving you feedback, giving you alternatives, to the point that we made earlier about being generative. And it can come up with lots of things, very quickly. It can generate huge amounts of content, some good, some bad [laughter] So, if you want to just throw in an experiment design, or a hypothesis, or whatever it may be, and ask it to give you feedback, and then just keep going, more feedback, more feedback, it will generate lots of it. Some of it you would disregard, but hidden amongst that there will be some gems.

0:44:00.8 MB: Now all of this is talking about the current state of these models. I think it’s not going to be long before, and actually I’m quite interested in o1 and where this goes with the reasoning capabilities. I think you’ll just be able to put in very simple prompts saying what it is that you are looking to achieve and it will spit out very high quality experiments that you can execute.

0:44:26.6 MH: Yeah, I asked the o1 model how many golf balls will fit inside of a 747. It did a pretty good breakdown, honestly [laughter] So, those kinds of reasoning problems, I think it does a good job with. I think Moe brought up something else about, sort of, there’s a value in being able to take on and answer a question, or understand and answer a business question effectively. And how could an analyst leverage AI to maybe even work with that kind of use case?
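
As an aside for the curious, a back-of-the-envelope version of that golf-ball estimate might look like the sketch below; every number in it is a rough assumption, not a specification:

```python
# Back-of-the-envelope: golf balls in a 747. Every number here is a
# rough assumption, not a spec.
import math

cabin_volume_m3 = 1_000      # assumed usable interior volume of a 747
ball_diameter_m = 0.0427     # a golf ball is about 42.7 mm across
ball_volume_m3 = (4 / 3) * math.pi * (ball_diameter_m / 2) ** 3
packing_fraction = 0.64      # random close packing of spheres

balls = cabin_volume_m3 * packing_fraction / ball_volume_m3
print(f"roughly {balls:,.0f} golf balls")  # on the order of ten million
```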

0:44:56.4 MB: Moe, can you unpack that for me slightly?

[laughter]

0:45:03.4 TW: You’re saying if somebody comes with a problem coming back and saying these are scenarios for…

0:45:11.1 MH: Well, like…

0:45:12.3 TW: Analysis approaches that…

0:45:12.4 MH: I don’t know.

0:45:11.9 MK: Actually this happened the other day. No shit [laughter] I said I would not talk about this, I said I would not talk about this and here I am talking about it. There was a CMO-type question, and someone put it into ChatGPT to say what are the possible hypotheses that might be an answer to this question. I was a little bit surprised at how good the answer was. And the reason that the answer actually was very good, and I found this with my own experimentation, is I find a lot of the responses I get come down to structuring things very logically. And so it’ll be like reason one, reason two, reason three, reason four, which as someone who ends up writing things into a lot of documents or like writeups, it then becomes a very easy structure to work with in terms of writing it up.

0:46:12.0 MK: And so I was like, “You know what? We are gonna just lean hard into this. We are gonna then tackle this,” Tim, you’ll love this, “Almost as analysis of competing hypotheses and be like, ‘Okay, these are the nine hypotheses that ChatGPT gave us. Let’s go through. Let’s try and knock them out. Let’s see what we can’t knock out, what we have evidence against. What can we say is possibly responsible, partially responsible? What are the data that we have for each one?’” And that’s actually how we ended up structuring our analysis, based off the hypotheses generated from ChatGPT. There you have it folks. I said I wouldn’t say it and I did.

0:46:48.9 TW: But one, that’s back to that number one, I think the, in Martin’s list of four, like the coaching or mentoring to me. And I feel like that’s…

0:46:58.6 MK: Is it coaching and mentoring? I don’t know if that’s the same.

0:47:03.6 MB: Or it’s the same, whatever.

0:47:04.7 TW: To me that’s what… I mean, that is what Jim Sterne has been kind of… I’ve now multiple times seen him do various iterations where he’s saying, “Ask it for ideas.” And Michael, that’s what you were saying. I’ve used it for that. That…

0:47:21.7 MK: Okay. I… Sorry, Tim, I apologize profusely for interrupting, but I can’t stop my brain from thinking right now. I think of coaching and mentoring as helping you make something you’ve already got better, or get there faster. So for example, it might be like using a different function. It might be QA-ing the work or, you know, making the language more concise. Whereas I think ideation is almost its own separate category, which is distinct from coaching or mentoring. Like, I don’t know, but maybe I’m being too…

0:47:55.2 TW: I don’t know, Martin, I mean, how would you define it? [laughter] It was… the four.

[laughter]

0:48:00.4 MB: Yeah. I thought of, when I thought about coaching and mentoring, helping you to ask better questions or think about things in different ways was part of that. So, I did see that kind of ideation of things being part of that kind of umbrella.

0:48:18.6 TW: But would you also, I mean, Moe with your, the, “Okay, these are nine, maybe two,” you could say, “These are garbage. I didn’t… ” one, iterating…. There were some that, but then also how would I actually validate that, right? Because there’s multiple ways to validate. I mean, you could take it farther and say, what data would I look at or to get a causal relationship to truly, if this, if my life depended on validating this hypothesis, number three in your list, what would you recommend that I do?

0:48:52.4 TW: Assume infinite resources. You know, I think, which all to me goes through a good iteration. But it’s interesting you asked it for, like, what are some hypotheses not what are the insights? What are the answers, right? You had it be that upstream piece and then, “Okay, we’re gonna put a human in the loop, who’s gonna say which of these are worth pursuing and how,” and hopefully someone was looking at it saying, “Some of these we just factually know there is no data that can validate that hypothesis already in existence. The only way I could do that is to generate some new data that the Generative AI doesn’t have access to. Because I need to run an experiment,” or, “I need to gather some data for my users,” or somewhere else. So, it’s in the process, but it’s not… I still feel like it gets treated as like, “Oh, oh, it’s this close, as it gets better, it’ll generate those nine things and they’ll be CMO-ready.” And it’s like, “No, it’s gonna generate those things and then we need humans and work in the process.”

0:50:00.2 TW: And I don’t wanna come across like… hopefully I’m not coming across as anti-Generative AI. I just think there needs to be…

0:50:03.0 MH: Oh you are, Tim.

0:50:04.1 TW: Decision [laughter] Oh, I…

0:50:06.6 MH: You are. No, I’m just kidding.

0:50:06.9 TW: We’re gonna run the transcript of this through Claude and say…

0:50:13.0 MH: That’s right.

0:50:13.1 TW: Who’s the asshole?

0:50:13.8 MK: Yeah. It’s actually really interesting. Like Martin started this whole episode talking about the terrifying scenario that we wouldn’t have jobs. And it’s funny, I am also using ChatGPT a lot at the moment for testing different ways to explain a technical concept to stakeholders. So, the other day I needed to describe probabilistic and deterministic approaches, and I was trying to test out, I actually had a few different models going against each other, to figure out what was the best option. But it still comes back to that human component of me looking, knowing my stakeholders well enough, having a good understanding of what concepts they’re familiar with, or what terminology has stuck with them, so that it will land. And then sometimes using different bits from different outputs to stitch it together. And yeah, I don’t know. I’m sure maybe when my kids grow up, maybe that step won’t exist, but for now, I definitely feel like I still need that.

0:51:13.9 TW: Well, you keep discounting that. Like, you keep discounting that like, “Oh, well maybe it’ll get to where it’s better.”

0:51:20.4 MK: It might. I’m not the future reader.

0:51:24.8 TW: Well, but I mean this is… This goes back, it’s not new: 10 years ago they were saying, “We’ll get to… I don’t need to learn R, don’t need to learn Python, I don’t need to learn SQL, because the computer will just do it for me.” And it’s like the half-life of getting partway there: it does something better, and then we have this world of optimism that says, “Oh well, this other part that it can’t do now, I’m sure it will get there. If I just wait, it will get there.” Like, I feel like there is a tendency to say, “I don’t need to become better at communicating, because I’m sure within six months it’ll just generate… Canva will introduce the next feature that just says, ‘Here’s the data set, generate the slide deck,’” and then you spiral into, “I’m gonna lose my job.” As opposed to saying no… Like, knowing who the people are that you’re working with, that matters; which of these analogies would work better; what’s the fine-tuned right level. And it’s not that it’s not gonna continue to get better. I mean I’m terrible as a futurist, but saying, “Oh, well maybe it’ll just do this for me within a few years,” I feel like is… Yeah.

0:52:45.1 MK: Okay. Number one, I don’t think I’m discounting that stuff, but I just maybe don’t get quite as passionate about it. So, given Tim’s rant though about, you know, we all still need to learn programming skills, we all still need impeccable communication skills. Computers won’t solve the day.

0:53:00.1 TW: I did not say that.

0:53:01.9 MK: Okay. Now it’s just fun.

0:53:04.0 TW: Paraphrasing.

0:53:04.0 MK: Come on, come on.

0:53:06.7 TW: No, I’m… The thing is the way you’re putting it is… I mean.

0:53:06.9 MH: We’re gonna use ChatGPT to paraphrase what Tim said.

0:53:13.2 TW: You can’t. What I’m saying is there’s value in this, and then you put a label on it that “Tim says you need to be perfect at this,” that…

0:53:22.2 MK: Oh, come on.

0:53:23.3 TW: That is fucking annoying. Right?

0:53:23.4 MK: Okay. Sorry.

0:53:23.9 TW: I mean it’s not… You’re painting it.

0:53:27.1 MK: I take it back. I take it back.

0:53:27.4 TW: Can AI do this, folks? I don’t think so.

0:53:31.9 MK: Okay. Tim and I are gonna be banned from being on a show together for a while. But what I was gonna say, Martin, is with the companies that you’re working with and the use cases that you are seeing: if you are starting out in the data space, you have finite time. You do have to choose where to spend your energy and your learning. I guess you probably have quite a good intuition of the direction of the industry and where it’s going. Where would you spend your energy? ’Cause we always get this, we’re like, “What is the programming language I should learn? How much time should I spend on learning data visualization, or on communicating results? Or writing up analysis?” There are so many things to learn and places to focus, and knowing, I suppose, the pros and cons of AI, where would you spend your energy if you were new in the data space?

0:54:18.7 MB: So, full disclosure, I am not an analyst. So, giving career advice to future analysts is [laughter] You know, I’m not the most qualified there. But I think the fundamentals are always…

0:54:28.8 MH: Or maybe, maybe you’re the most qualified. So [laughter]

0:54:31.0 MK: Yeah.

0:54:35.2 MB: There’s… As I mentioned earlier, being an expert in the field helps you get more quality content, or quality outputs, from the AI; you know the questions to ask to steer it. I also think, from an AI perspective and from the Gen AI space, there’s really fundamental value in playing with the tools: play with it, poke it, prod it, pull it to bits, and really look at the outputs that you’re getting to understand where those limits are within these tools. It’s very easy to just take it at face value. It’s an AI, surely it’s a computer, it’s told me the answer. And as I’ve mentioned earlier, this is clearly not true. We can fall asleep at the wheel if we just take the outputs at face value. So yes, from a data end, I would pursue the career, or pursue the skill set, completely ignoring that AI exists. And I would treat learning AI as a separate endeavor in and of itself, to understand what that is and, more importantly, what it isn’t at this moment in time.

0:55:44.6 MH: Yeah. That’s good. All right, we’ve gotta wrap up. This is interesting. I didn’t think we had any passion for this topic at all, but apparently we have quite a bit, so this is awesome. Well, one thing we like to do is go around the horn, share our last call, something that might be of interest to our audience. Martin, you’re our guest. Do you have a last call you’d like to share?

0:56:04.3 MB: Yeah. So there’s Machine Learning Street Talk, a podcast about machine learning. They recently did an episode on: Is o1-preview reasoning? So the new OpenAI model, is it actually reasoning? And it’s about an hour-and-a-half deep-dive discussion, quite philosophical in nature, about what is reasoning, what is knowledge, and are the things that these language models are doing truly reasoning? And it’s really fascinating for anyone that’s interested in learning more about that.

0:56:35.3 MH: Nice. Awesome. Thank you.

0:56:37.2 TW: That’s funny. That same post by Cassie talks about how, I can’t remember which model, it says “thinking,” and she was like, “It says thinking.” She’s like, “It’s not thinking. It’s kind of poking a little bit of fun at the human while it’s spinning around.” I was like, “Oh, I never thought about that.”

0:56:56.1 MH: Appear to be human. All right. Moe, what about you? What’s your last call?

0:57:02.4 MK: Okay. Mine’s a weird one. So, I am talking about something that has nothing to do with Gen AI. I’m doing a professional leadership course internally. I’m very lucky we have internal coaches at Canva that we get the opportunity to do this. And the topic we covered last week was about kind of like our leadership values and our, what’s called our leadership shadow, and I had written my leadership values a few years ago. I’d run it through some mentors and people that I’d chat with, and I was pretty happy with them. And of course I dusted them off the shelf and I looked at them and was like, “Yeah, shit.” I think what really stood out to me is that at the time I wrote them, they were all very aspirational and I would say very soft-skill-based. And I didn’t feel that I had something there that captured the team’s output or drive.

0:58:00.4 MK: And I realized over the last few years that is something that’s really important to me. So number one, this is a reminder: if you do have leadership values, go check on them. But the other thing that happened is we started talking about our leadership shadow. And so that’s where you say something’s important to you, but maybe the way you behave doesn’t show up in the same way. And so an example, not a reflection of me at all, is you say that your team are the most important thing, you really care about everyone that you manage, but then you move your one-to-ones regularly, or you reschedule the team meeting every month, or something like that. And so it’s about identifying where are you saying things are important, but your behavior is actually quite different if you were in the team and seeing that. And yeah, it was just kind of a nice, I mean challenging, exercise, but a good exercise to see how you then overlay that with the values that really are true to you and how you’re gonna show up and make sure that you’re demonstrating that to the team.

0:58:57.4 MH: You should throw those into ChatGPT and say, “What is my leadership shadow?”

0:59:00.0 MK: I don’t think it knows me well enough yet.

0:59:02.9 TW: Yeah.

0:59:03.0 MH: But I bet it will, give it a couple weeks.

[overlapping conversation]

0:59:08.2 MK: I feel like maybe I should let Tim be in charge of the prompting and then maybe we would get some real gold there. [laughter]

0:59:13.5 MH: All right. Well Tim, what’s your last call?

0:59:16.9 TW: So, I'm gonna do a twofer. My first one is a plug for the Data Connect Conference; we're just a little less than a year out from it, in early October of 2025. I've talked about it before, we've done promos for it. It's dataconnectconf.com. But the call for speakers is already open. So if you are, or if you know, someone who is a woman or a genderqueer, gender-nonconforming, or non-binary individual who would have something to speak about at a data conference, consider putting in a pitch for that. It's a great conference, open to all to attend, just limited on who the speakers are. So that's my plug for that conference and getting great content there. And then as my actual last call, which maybe does tie into this topic: there's a guy named Peder Isager, I don't know how to pronounce his last name, who wrote a post called Eight Basic Rules for Causal Inference.

1:00:25.6 TW: What's funny is that the URL is actually "seven basic rules for causal inference," so I am really curious as to which one he initially hadn't thought of. But it gives simple little diagrams that actually made me think. The first couple I'm like, "Yeah, knew that, knew that." And then it got really interesting. So, when it comes to the topic we had today, I think causality is one of those things that is really kind of profound and tricky. And that was a nice post, with simple little diagrams, that kind of makes you think, "Oh, this is why all the answers are not just in the data that I've already collected." So, Eight Basic Rules for Causal Inference. Michael, what's your last call?
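
Tim's point that the answers are not all in the data you've already collected is easy to see with a toy simulation. The sketch below, using made-up numbers of our own rather than anything from Isager's post, shows confounding, one of the situations those rules cover: a hidden common cause makes two variables correlate even though neither one drives the other.

```python
# A minimal confounding simulation: Z drives both X and Y, X has no effect
# on Y, yet the observed data alone shows a strong X-Y correlation. All
# numbers are illustrative, not taken from the Eight Basic Rules post.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

z = rng.normal(size=n)             # hidden common cause (the confounder)
x = 2.0 * z + rng.normal(size=n)   # X depends on Z only
y = 3.0 * z + rng.normal(size=n)   # Y depends on Z only; X appears nowhere

# Naive read of the collected data: X and Y look strongly related (~0.85).
print(f"corr(X, Y) = {np.corrcoef(x, y)[0, 1]:.2f}")

# Holding the confounder roughly fixed, the association disappears (~0.00),
# which is what the causal diagram predicts and the raw data alone hides.
near_zero = np.abs(z) < 0.05
print(f"corr(X, Y | Z ~ 0) = {np.corrcoef(x[near_zero], y[near_zero])[0, 1]:.2f}")
```

In other words, deciding whether changing X would move Y takes either a confounder like Z that you thought to collect, or an experiment, which is part of why experiment design kept coming up earlier in the episode.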

1:01:08.8 MH: Well, in the spirit of this topic: a couple of people that I know very well, 'cause I hired them both and they used to work for me, have started a startup in the AI space called Moonbird, moonbird.ai. They are building agentic tools and services and things like that, but their first product is an AI agent specifically for looking at Adobe Analytics implementations. So, if I were walking into a situation where I was looking at an Adobe implementation today, I would be using that tool to bring me up to speed, give me information, provide me some knowledge.

1:01:46.0 MH: So, if you're in that space, it's a great little tool for that. So big shout out to the Moonbird team over there. All right, well Martin, thank you so much for coming on the podcast. Who knew that little networking session at Marketing Analytics Summit would eventually lead to this? [laughter] Martin and I were at a table together at Marketing Analytics Summit. We got to introduce ourselves, and here we are. So thank you, Martin.

1:02:13.1 MB: Thank you. Yeah, it was some good dim sum we had. [laughter]

1:02:17.3 MH: Yeah. That's right. All right. And then of course no show would be complete without a huge shout out to Josh Crowhurst, our producer, who does so much behind the scenes to make things happen. Josh, thank you. And of course, a big shout out and thank you to Tim and Moe, my co-hosts, for bringing so much life and passion to this episode.

1:02:36.2 MK: Arguing. You mean arguing.

1:02:38.1 MH: Yeah. Well, you know, I asked ChatGPT like, “Give me a positive spin on all this bullshit.” [laughter]

1:02:47.8 MH: All right. Well, this is an awesome topic, obviously one I think is super interesting, and it's clearly growing and becoming more and more a part of the conversation. So I think this is probably not the first or the last time we'll talk about it on this podcast, but I like the start we got today. So again, thank you, Martin. And as you're going out there, we'd love to hear from you: what are you using AI for? What kinds of things do you see in your work? It's easy to reach out to us. You can get a hold of us in the Measure Chat Slack group or on LinkedIn. And we also now have a YouTube channel, so you can check us out there as well.

1:03:21.5 MH: So, go ahead and reach out to us. We'd love to hear from you, unless you're pitching an AI-related topic or guest from a PR auto-bot-type situation. We do get a lot of those emails, but we'll do the picking, and I think we got the right person for this today. All right, anyways, I know that as you're going through life, you're gonna be using AI more and more, so keep the good work going. And I know I speak for both of my co-hosts, Tim and Moe, when I say: keep analyzing.

1:04:00.7 Announcer: Thanks for listening. Let’s keep the conversation going with your comments, suggestions, and questions on Twitter at @analyticshour, on the web at analyticshour.io, our LinkedIn group and the Measure Chat Slack group. Music for the podcast by Josh Crowhurst.

[background conversation]

The post #257: Analyst Use Cases for Generative AI appeared first on The Analytics Power Hour: Data and Analytics Podcast.

  continue reading

13 에피소드

Artwork
icon공유
 
Manage episode 447484101 series 2448803
Michael Helbling, Tim Wilson, Moe Kiss, Val Kroll, and Julie Hoyer, Michael Helbling, Tim Wilson, Moe Kiss, Val Kroll, and Julie Hoyer에서 제공하는 콘텐츠입니다. 에피소드, 그래픽, 팟캐스트 설명을 포함한 모든 팟캐스트 콘텐츠는 Michael Helbling, Tim Wilson, Moe Kiss, Val Kroll, and Julie Hoyer, Michael Helbling, Tim Wilson, Moe Kiss, Val Kroll, and Julie Hoyer 또는 해당 팟캐스트 플랫폼 파트너가 직접 업로드하고 제공합니다. 누군가가 귀하의 허락 없이 귀하의 저작물을 사용하고 있다고 생각되는 경우 여기에 설명된 절차를 따르실 수 있습니다 https://ko.player.fm/legal.

Judging by the number of inbound pitches we get from PR firms, AI is absolutely going to replace most of the work of the analyst some time in the next few weeks. It’s just a matter of time until some startup gets enough market traction to make that happen (business tip: niche podcasts are likely not a productive path to market dominance, no matter what Claude from Marketing says). We’re skeptical. But that doesn’t mean we don’t think there are a lot of useful applications of generative AI for the analyst. We do! As Moe posited in this episode, one useful analogy is that thinking of using generative AI effectively is like getting a marketer effectively using MMM when they’ve been living in an MTA world (it’s more nuanced and complicated). Our guest (NOT from a PR firm solicitation!), Martin Broadhurst, agreed: it’s dicey to fully embrace generative AI without some understanding of what it’s actually doing. Things got a little spicy, but no humans or AI were harmed in the making of the episode.

Links to Resources Mentioned in the Show

Photo by Barbara Zandoval on Unsplash

Episode Transcript

[music]

0:00:05.8 Announcer: Welcome to the Analytics Power Hour. Analytics topics covered conversationally and sometimes with explicit language.

0:00:14.8 Michael Helbling: Hi everybody, welcome. It’s the Analytics Power Hour. This is episode 257. You know, since the Industrial Revolution, it seems like the interest in automation is always around. And in the analytics space, there’s always a lot of interest here as well. You know, that entails handing off parts of the work to a machine, to increase efficiency. These days, AI is the newest entrant into this discussion. How and what can we hand off to an AI when it comes to analytics? Are they gonna take our jobs? Will it truly usher in an era of data democratization? I don’t know. I guess we should talk about it. And to do that, let me introduce my co-hosts, Moe Kiss, how are you going?

0:01:00.4 Moe Kiss: I’m going great, thanks for having me, Michael.

0:01:00.9 MH: It’s awesome. And Tim Wilson, some would say you’re already a computer already. Your results are too perfect. Now, how you doing, Tim?

[laughter]

0:01:10.9 Tim Wilson: Ouch. I’m getting to where I’m a computer when it comes to responding to a podcast pitch about… Pitches about generative AI for analytics.

0:01:21.1 MH: There you go. That’s a part of the job that’s…

[overlapping conversation]

0:01:25.6 TW: They’re flowing in fast and furious and…

0:01:26.0 MH: Fairly automated.

0:01:29.3 TW: Reached out to Martin ’cause we’re like, “How about we go with someone who we reached out to instead of somebody who came in to us?”

0:01:34.9 MH: Yeah, a lot of interest in this, and I’m Michael Helbling and we did wanna bring on a guest who is at the forefront of this issue and luckily at Marketing Analytics Summit this year we met Martin Broadhurst. He’s a consultant on AI for marketing, the owner of Broadhurst Digital, and he serves on the editorial board of the Journal of Applied Marketing Analytics, and today he is our guest. Welcome to the show Martin.

0:01:56.8 Martin Broadhurst: Hello Michael. Hello Moe. Hello Tim. [laughter]

0:02:00.2 MH: All right. Well, we’ve got a lot of questions. So buckle up and in the next hour or so, hopefully we’ll learn a lot about what AI can do for us in analytics, or what it can’t.

0:02:10.3 MK: I’m not gonna lie. I’m like weirdly scared of this episode and it has been on my mind a lot.

0:02:15.2 TW: What? Why?

0:02:16.5 MH: All right. Well, let’s dig into that. Maybe this is just a… Martin what we need is a reassurance for all of us that we’ll still have jobs after this or something [laughter]

0:02:26.1 MB: I don’t see anybody’s job going anywhere in a hurry, not to spoil what’s to come. But yeah, I think you’re okay for the time being.

0:02:35.9 MH: Yeah. Well, maybe Martin, to kick this whole thing off, we can talk a little just about how you got into this area in the first place and sort of some of the things you’re seeing in the industry right now.

0:02:45.6 MB: Yeah. So my background is in the CRM and marketing automation space. This is where I’ve been working with businesses for years now. And when OpenAI made the GPT-3 API available. I immediately started playing around with it and experimenting with the different tools, seeing what the capabilities were and understanding the mechanisms of how these large language models actually worked to try to kind of push them to the limits. And over time, I’ve just built up a lot more experience with that. And yeah, this has turned into a nice general addition to my skill set where I’m working with clients on how to automate and find use cases for AI and generative AI in their workflows and in their day-to-day tasks.

0:03:36.4 MB: And unsurprisingly, data analysis is something that comes up quite a bit. So, yeah, I’ve been trying to test the models as much as I can to see where the limits are before they break. And this month I’ve just published an article in… A journal about how to use large language models with spreadsheets with a bunch of different techniques for how to think about using generative AI alongside spreadsheet and spreadsheet design.

0:04:09.6 TW: I mean, it’s a short article. Don’t you just say, “Here’s the spreadsheet and find me insights,” and then it just goes from there? I mean…

0:04:17.6 MB: That is the dream, isn’t it? Wouldn’t it be great if that actually worked like the marketing spiel?

[laughter]

0:04:26.6 MK: Well, okay, the fear, the fear I have at the moment is actually not about losing my job because I see the amazing efficiencies I even already have in my own job. What is terrifying me at the moment is the, “We wanna do AI.” We had a conversation the other day, “We wanna do GenAI on this thing.” And I get really… Let’s just say anxious. Let’s call a spade a spade. We’re kind of swapping the way we would normally solve a problem from what is the problem? What are all the ways to solve it? What is potentially the simplest, most explainable, whatever way to get there? Versus going, “We’re going to solve this problem with X. How do we do more of X?” And that’s the bit that’s stressing me out.

0:05:15.3 MB: Yeah. Finding the… Or prescribing AI first before you’ve even dug into the potential solutions, starting with that and saying, “We… ” And that’s actually one of the things that clients will sometimes say to me, they’ll just come to me and say, “We want to use AI.” It’s like, “Well, why would you start with that as the solution before you’ve looked at the implementation?” And I think this is a really common problem. I would always start with, look at those tasks that you do that have things that require certain amounts of batch work, where there’s just repetitive nature to the tasks and you can automate that away. But yeah, really understanding the nature of the problem is probably the starting point before you even get into what the solve is.

0:06:04.8 MH: But is there… Generative AI seems, it’s tangible ’cause it’s so easy for somebody to play with it, which you, that I would say there’s a higher bar for someone to just like dabble with SQL or Python or are coming from scratch. So it’s broadened the audience of people who can get a taste of what the technology is. And to me where the massive miss is just because you get a sense of what it does, you have a back and forth with ChatGPT, it kind of misses what analysis is. And like, it feels like there’s an oversimplification of the steps of saying, “Oh, well, no, no, just gonna be AI is gonna be the drudgery of the tasks.” And you say, “Well, the drudgery of my analysis work is doing this data cleanup. And I’ve played with this ChatGPT. So what if I just told it to do that.” But it kind of misses what the, even in the drudgery of the work, what the human component is, much less just the reality of identifying a problem where you’re trying to use data to solve it, like it just…

0:07:23.0 MH: It feels like it’s this big bucket of like a tool and somehow people are like, “Oh, well, the tool must be smart enough to get to how to fix it.” The fact that you said you’ve got the spreadsheets thing feels like even that is nuance because you have to kind of help it understand what a spreadsheet is, which maybe it kind of knows and then sort of what the data within it represents, right?

0:07:47.2 MB: I think the, what you’re kind of driving at this is that people don’t understand the tool and the nature of the tool and the kind of mechanism behind the tool. I think it’s really important that with generative AI people understand things like next token prediction. What does that mean? What is it doing under the hood? And when you’ve played with the models a bit and you understand some of the settings, things like temperature, for instance. So for anyone that isn’t aware of the temperature setting in a large language model, there is a setting between zero and two and the higher it is, the more chaotic the answers you are… And the basic principle of temperature is that it’s like in the physics systems, the higher the energy in a system, the more chaotic, and the lower the temperature the more controlled it is.

0:08:40.1 MB: If you play around with that in the API, for instance, you can get really consistent answers. But where you use something like ChatGPT, you don’t have access to that particular setting. So it’s generative, right? It’s not descriptive or calculating. It’s coming up with a range of answers and the subtleties in the way that you write, the way that you input, the way that the data might be structured, whatever it may be. And if people think it’s like computer software that they’ve always used in the past where you press a button and it always gives you this thing, it does this job consistently in the same way every time, they will be sadly mistaken because that’s not what’s going on under the hood.

[music]

0:09:27.0 MH: It’s time to step away from the show for a quick word about Piwik PRO. Tim, tell us about it.

0:09:32.9 MK: Well, Piwik PRO has really exploded in popularity and keeps adding new functionality.

0:09:38.6 MH: They sure have. They’ve got an easy-to-use interface, a full set of features with capabilities like custom reports, enhanced e-commerce tracking, and a customer data platform.

0:09:50.4 MK: We love running Piwik PRO’s free plan on the podcast website, but they also have a paid plan that adds scale and some additional features.

0:09:56.9 MH: Yeah, head over to PIWIK.pro and check them out for yourself. You can get started with their free plan. That’s PIWIK.pro. And now let’s get back to the show.

0:10:10.3 MK: Oh. I’ve just had this, I don’t know if this analogy makes sense Tim but hear me out, I constantly am doing this thing in my head where I’m trying to understand stakeholders’ perspectives and understanding an MMM, and how it’s different from their worldview of attribution and what attribution gave, which when you see a table and it goes: This channel this many sign ups, this channel this… Like the concept of MMM results is quite difficult. We start talking about diminishing return curves. We start talking about return on ad spend at different spend levels. And like, there’s just all this like complexity there. And I feel like a similar analogy could be made here, right? Like, you expect input output, but there’s actually so much nuance. Like, is that a… Is that like… I don’t know if I’m grasping at straws here, but in my mind, I was like, this would be the problem of people trying to take data analysis using GenAI without understanding it well enough. That would get you into the danger territory, right?

[overlapping conversation]

0:11:23.8 MB: I think that works on both… At two levels. There’s the not-understanding-the-GenAI-mechanism well enough, so not really understanding the strengths and the weaknesses of the tool. Which is going to be a hindrance in and of itself. But then there’s also that level of… People often say that if you use a large language model and you are an expert, you can get expert-level outputs from it. The better the quality of your input, the better the quality of the outputs. But if I, as someone who isn’t a seasoned data analyst, throws in a spreadsheet and says, “Give me some insight into this.” I’m asking bad questions and I’m getting very average outputs.

0:12:06.3 MB: So it works on both ends of the spectrum. If you’re not giving good context and good prompts, you’re going to get bad outputs. But also if you don’t understand the limits of the technology itself, you might just… Well you don’t know that it can’t actually do the thing you’re asking it to do.

0:12:22.5 MH: That’s… Cassie Kozyrkov last month wrote a post that was very timely as we were prepping for this episode. It was ‘Strawberry’s Paradox: When Perfect Answers Aren’t Enough’. And she sat with some Nobel Prize winner who she worked with and she was working on her PhD. And they have like just a rift for a conversation for a while. But what she… I thought very, very well articulated, put that in that said, “Imagine the AI that can give the perfect answer, that it is perfectly accurate and correct if you,” just as you said, Martin, “If you don’t ask it a good question, it’s,” you know, it is going to be like, “What’s the answer to life, the universe, and everything?” “It’s 42.” Right? It’s not a good question. And that’s this other piece that has kind of bothered me that it feels like we’re looking… The people who are looking at AI all have lived in a world without generative AI. So we’re bringing our human experience, having worked with data, having dealt with the business problems, having grappled with trying to explain multi-touch attribution versus MMM.

0:13:35.0 MH: And that’s the lens we’re looking through it at and saying, “Oh, here’s the future. It’s gonna take everything.” Well, if you fast forward and say, “Wait, that’s discounting the expert level of the input.” So even if that worked for a very, very short, for a period of time, that would start to go away because all of a sudden you’d have people who were trying to skip a bunch of steps of the human existence to get to the AI and hoping that the AI can close that gap, which seems very… I don’t know if that’s just like philosophical or it seems like, “No, that’s what would happen. It’s we’re counting on the tool to close a gap that doesn’t seem like the tool is ever gonna be equipped to fully close.” I don’t know if that made any sense. Moe, I’d really like your analogy. Or we can just cut this whole section out and… [laughter] I mean, where do you, with a spreadsheet, where are you using… What’s the start and end point of generative AI when given a spreadsheet?

0:14:49.5 MB: So I think some context has to be given there in that these models are changing rapidly. It was only a few weeks ago that we had GPT or ChatGPT o1 preview released, which is supposed to be, you know, much better at reasoning, although that’s its own conversation in and of itself. The models’ capabilities are changing all of the time. So in… What I propose is that there are, as it stands, four ways that you can really use ChatGPT or any large language model with a spreadsheet. And one is to… And my preferred route is to just use it as a coach or a mentor. It’s that very clever assistant that you’re not actually giving access to the data, but you are… You get stuck on something. Maybe you need a bit of code writing that you can stick in a macro, or you’ve forgotten the function to do a certain thing, or you’ve got a really long formula that you need optimizing and reducing.

0:15:45.0 MB: It will do all of that for you. And it’s the actual spreadsheet and the language model don’t interact. This is where AI is very strong at the moment. It can be quite good for that. There’s over 500 functions in Excel. Trying to keep all of those in your head is very difficult. Whereas if you’ve got that very smart assistant next to you, it can go, “Oh yeah, I know exactly what that is.”

0:16:11.3 MB: Then you’ve got the file ingestion. This is where you can give the spreadsheet to the model. So you can upload to ChatGPT the CSV, the Excel file, whatever it may be, and it can use Python in its code environment to execute tasks and functions on the data. The outputs from this can be very good. It can do some incredibly powerful things, but there comes a big flashing light warning sign saying the outputs can also be complete hallucinations. I have got lots of examples. In fact, nearly every single time I do this, the data that it presents back has some errors in it that if you’re not paying attention, you would not spot.

0:16:54.9 MB: So, case in point, from the Marketing Analytics Summit, I showed an example where we had a bar chart showing cohorts grouped by age. And there were two bars, satisfied or unsatisfied. And it was just, which one was higher? A blue and an orange bar. And it… In the written text, so in the charts that it creates, they are accurate. It seems that the data manipulation in the charts that it creates are accurate. But then its description of the charts, its written description of it is wrong. Like consistently, it would say, “You can see that for the 35 to 50 year-old cohort, satisfied is higher than dissatisfied,” and it’s clearly the other way around. And this is really consistent. This comes up time and again.

0:17:44.8 MB: So you wouldn’t want to rely on it for uncovering the insights. Because the…

0:17:52.7 MK: Do you know why… Like, I know that me asking why is a stupid question right now. Like we don’t get to look inside the black box, but like, that’s a really strange error, like really strange that it would be able to interpret it correctly in the graph. But then… Is it like something to do with converting it to the graph and then the graph back to the descriptive text? Or like, is that the step too far? Like I just… How do you know… You don’t know where the boundaries are.

0:18:18.5 MB: So the graph is separate from the… From what… The model doesn’t see the graph. The model runs the Python and then takes the… And turns the Python script into something that sits in the HTML in the browser window, but the actual model doesn’t see the output. Because the model has turned everything into tokens, where you’ve got a graph that has, or it’s done the… It’s used Python, it’s got some numbers attributed to the different cohorts and positive and negative, also satisfied or dissatisfied. They’re just token IDs for the model. So it’s not like… The system doesn’t see the raw number. It sees the tokenized version of the number and then has to, in its model, understand the relationship between these… This is my best guess, right? So I’m making some assumptions here. I would like to see, particularly within the chatbot version of these tools, I would like it where it creates the graph and then turns the graph into an image and feeds that back in. Because the funny thing is, if I take a screenshot of that graph and feed it back into ChatGPT and say, “Tell me what’s going on with this data,” it consistently does a very good job of that because it’s got the vision capabilities.

0:19:38.9 MK: That is nuts.

0:19:40.1 TW: Sort of like the second order of thinking is where it starts to fall apart.

0:19:43.7 MH: But what, so that’s the second… So that was like number two, I think of four, like the, like to what end it ingests it and outputs a result. And maybe that’s going to get better with the added reasoning as more models come along. Is it going to be easy for somebody to just wave their hands and say, “Oh, well, you’re the second one,” it’ll ingest it and it’ll output results. And the results will be very, very reliable. It can count the number of bars in strawberry and it will always give the right answer. So is that an easy one to kind of check off and say, “Yeah, that’ll get fixed,” or.

0:20:21.1 MB: I would expect so, but we don’t know at the moment. With o1 doesn’t have… You can’t do file uploads. You can’t upload images. It’s just text in text out. You would expect that to improve. The next method is actually the using the assistance within the spreadsheet software itself. So Microsoft Copilot by way of example. This is a really difficult one to judge because the version that I wrote the paper on was the previous version. And then literally I think that the day the publisher signed it off, they announced wave two of Copilot, which has new capabilities. So the new version, which I haven’t yet tested is supposed to be able to actually write and execute Python on new spreadsheets and do more.

0:21:13.3 MB: It can actually interact with more of the tools and the functions because the old version could do that, but would often say that it had done a task and it hadn’t done a task, or it would tell you that it couldn’t do a task because there was too much data. Whereas I think those limitations on wave two have been lessened somewhat. So that’s really, if we think about where we… What the ideal is, I think this is the ideal. You want the chatbot in the environment where you’re working with that data and it’s able to actually execute almost agentically different functions, tools, tasks, directly within the file itself.

0:21:53.5 MK: So the, okay, the lay person’s version of this, rather than going to something separate, having to kind of ingest the data, yada, yada, yada, it’s built into it. And the difference is not only can I ask as, like, work as a “helper”, or whatever smart marketing person called it. It can also actually execute functions on your behalf. So it can do the doing, not just give you steps on how to do the doing.

0:22:22.5 MB: Yeah. And the first version of Copilot in Excel was supposed to be able to do some of the doing, but it did it wrong really more often than you would ever want the tool. It felt like it was released a little bit too early, which, you know, fair enough they’re iterating on these things really quickly, but yes, it should… And I think the more important thing with that is actually it can write and execute Python within the environment, which just adds a lot more capability to Excel.

0:22:52.0 MK: I’m really curious from like a product perspective, because that, what you’re talking about here basically implies that unless you were truly embedding this technology into the product roadmap in a really meaningful way, you will probably fall behind in any kind of tech company, which I hadn’t really thought about. Yeah. Okay. I’m having lots of light bulbs. Maybe I should do more recordings in the evening.

0:23:21.0 MH: But [laughter] but so I’m trying to figure out the limits of that. And this is also realizing that, again, the slew of pitches we’re getting for guests on the show, like the term Gen BI, like, “Oh, Gen AI is going to bring Gen BI,” which I’m trying to figure out of these three categories, like, where does it go from I’m a user of Excel, which means I’m a human being on the planet. And I’ve got a tool that gives me a little bit more of a natural language interface at kind of a micro level to go bite-sized along the leap where that gets… Where I’m not sure if that’s included is, “Oh, well, you’re just going to have a natural language interface to ask how much revenue did we get by channel last month?”

0:24:12.0 MH: That feels like more dangerous territory than saying, “Hey, can you extract… ” put a filter in so it flags anything that’s within the US as US and everything that’s rest of world, rest of world, which is a more specific instruction. Is that a spectrum, or is there a hard line where you’re crossing from a Copilot to my hope for wished for natural language interface to the data that is reliable?

0:24:48.2 MB: I think that’s where Microsoft would like Copilot for Power BI to get to. I don’t have any experience with that, particularly with this new wave of updates that are coming or have recently been announced. What I can say is that people that were using… That Power BI power users that were really interested in Copilot stress-tested it at the start of the year. And they described it as, one description said, “It’s not ready for CEO-level insights and presentation of data at the moment. It’s quite simple. If there are several steps of manipulation of the data that you need to do in order to get the insight that you’re after, it falls down. It doesn’t understand at the moment relationships between different entities in your data set.”

0:25:41.0 MK: So how are you seeing companies use this, or like analysts use it in their workflow? Kind of like, I know we’ve talked a little bit about the spreadsheets, but if you take the CEO example of amazing boss lady comes to you and says, “Sales are down, what’s happening?” And you go through that analyst workflow of solving the problem. Like, do you have kind of any intuition how people are really leveraging this in their day-to-day?

0:26:09.1 MB: The file ingestion, if you can get your data sources into ChatGPT, you can get, with the right prompting, really good insights really quickly. It can bring together multiple data sets. It can merge them, and it can, if you are very good at being able to describe your data and what you’re after, it can give you those graphs and those charts. How much people are doing that day to day? I am… I don’t see that a great deal. When I speak to people the most common experience I have is people going, “It didn’t quite do it for me.” Like, “It told me something was wrong.” So there’s an element of doubt that is seeded in people’s minds. And this is the thing.

0:26:55.8 MB: I think people are so used to using a spreadsheet, a calculator, something that gives numbers in numbers out, that makes sense and is always true. Where you have a tool that you use it 10 times and 2 times you go, “That’s not right.” It plants a seed of doubt in your mind. So I think until the hallucinations issue is cracked, we’re not quite going to get there. Everything feels, particularly on the data analysis side, I would say you can get surface-level insights, or you can get visualizations created very quickly. You can do data manipulation very quickly. If you’re someone that doesn’t know R and doesn’t already know how to manipulate the data, you can do that. It gives you those additional skill sets or access to those kinds of skills in a limited capacity. But how much people are using this in the day-to-day, I would dare say it’s more as an assistant to help them shortcut some code writing functions rather than really relying on it for insight.

0:28:07.7 MH: So, what is the fourth way? I feel like I want to dive back into, and I’m not sure whether I’m hitting a gap or whether I’m hitting a… Or just know that enough of our listeners will be like, “He said four, he said four.” So, and I want to break that tension, so.

0:28:19.9 MB: Yeah. I did say four.

0:28:22.5 MH: This show much like an AI gets lost along the way.

[laughter]

0:28:29.1 MB: There is a fourth and the fourth is actually less useful for analysts in some respect, but it’s actually adding an entirely new function to the spreadsheet itself. So a good example of this is Anthropic’s Claude model has a Claude for sheets add-on. So it’s a Google sheets add-on and it creates a new function, equals Claude, and then equals Claude open bracket, and then you can put your prompt in there. And then the return of that prompt is what populates that cell. So that means that you can assemble prompts using data input from other cells. And just like you would any other formula, you can build a formula and then send that to Claude and get Claude’s response straight back into the spreadsheet.

0:29:13.9 MH: Okay. So then now I’ve got, well, so one, I think one thing, and maybe it falls in the second kind of the file ingestion. It seems like there is a lot of using generative AI for analytics and it winds up, it’s really using generative AI for analytics engineering or for data engineering or for data observability or for… So there does seem like there’s a whole class of tools that are either kind of pipeline building assistance or data monitoring, which to me, that’s not the analysis, that’s the upstream. And it seems like just my gut is there 60 or 70 percent of things that get labeled as generative AI for the analysts are really generative AI for the data engineer or the analytics engineer; would you agree with that? I mean, are you seeing those where that’s getting labeled as for the analysts, but it’s not for analysis and that’s causing maybe some confusion in the market?

0:30:25.0 MB: Yeah, I think that’s probably true. And I’m just yet to see really strong use cases. And I guess you guys are more at the coalface of this than I am. I’m yet to see really strong examples where people have said, “We use generative AI for this level of insight and analysis and look how I did it. And that was all AI, ta-da,” you know, “We sprinkled in some data and got this amazing output; isn’t it great? Aren’t your jobs all doomed?” I’m not seeing that.

0:30:57.4 MK: See, I find there’s two groups of people. There are the people that are doing very cool shit and are doing it pretty quietly and not telling people. And that kind of tends to be the way that I, I mean, I’m not saying that I’m awesome, but like when I tell people I’ve taken a shortcut, I’m not going to tell people, let them think I did all the work. Not that I transcribed a voice note and then used it to write up my interview feedback and then pasted it in, in a really efficient manner. And everyone thought that my interview feedback was spot on. But then there’s the other group that are like, “Oh my God, we did AI, look what we did. Over here, over here.” And it’s like, it just seems to be really polar opposite. I don’t feel like we’re at that maturity of educating people about how to do it well and the pros and cons. Like it seems to be, I don’t know, like very polarizing at the moment, but maybe that’s just my lived experience.

0:31:55.5 MH: Having gotten buttonholed by somebody who was definitely the latter, it was a really long and exhausting conversation. And it really wasn’t a conversation, it was just him going on, which what was interesting is that when the probing that I did do with him was all of this really, really cool stuff was around rapidly pulling in data sources and being able to use web hooks and generate code to pull data sources in. And then with some iterating on the model, do some kind of mining of these multiple data sources to generate something, which was all very interesting, except the two things. And he, clearly this fella talks about it to anybody who will listen to him and does not stop. And then he started making these bold claims about any company could go from $0 to $10 million with one person with just AI.

0:33:01.2 MH: This is amazing. But then as I was probing, one, he admitted that all the stuff he did, he did actually have to talk to the subject matter experts to even figure out what it should be doing, which seemed like very much a human task. The thing that we didn’t get into that just seemed… He went on a great length about he does not have a technical background. And, but he also went on about how he didn’t have to write any code. He would just have this generate the C-Sharp and then he’d take it. And that felt like another component of, well, that seems sort of fragile. Like you’re… The playing around I’ve done with code generation is it’ll generate something, but it may not be clean or well-written or something that you want to have code that lives on for the ongoing production of any sort of ongoing deliverable. Like it equates… I talked to my son who’s a software engineer, and get him started on somebody who’s a crappy software engineer, or sometimes a faceless person in the past where he’s inheriting the downstream. And I’m like, “Oh my God, the ability for a machine with a temperature setting probabilistic in nature to generate code.

0:34:22.2 MH: That’s then going to live, that some poor analyst or some future generative AI needs to modify the code? How is that going to work?” Like, so the ability to say, if you’re going to write something that needs to have staying power, you can use the code assistant, but you probably need to know the code and maybe do some real iteration with it, as opposed to just saying, “I don’t need code.” I mean, I’ve had multiple people saying, “No one needs to learn to code. It’ll just generate it for you.” And I’m like, “Well, that’s somebody who’s never learned to code says that.”

0:35:01.3 MK: Can I challenge you a bit here? One of the things that is a little bit exciting, like when anyone asked me, I was like, I will say, “I’m not that technical. I can do a bit of programming, but I’m pretty shit.” And I’m way more shit than I was five years ago. And I know I have people in my team, for example, that they would share that and they would say, “That’s not my strength. Programming is not my strength.” They definitely have made endeavors to learn and will try their best, but they’re never going to be a gun programmer. They’re not the one that are like QA-ing 50,000 PRS from other engineers, data scientists, every day. One of the things that I find so challenging in data is to find people that are really good at figuring out how to answer a question, solve a problem.

0:35:49.0 MK: And it’s like the idea that you could have someone who might not be strong in a particular skill, like programming, but has this real superpower to understand and answer a business question and you can make them better at their job by kind of giving them this free buddy or coach or technical mentor. Like, I find that fucking exciting. That is cool.

0:36:18.7 MH: So I think you glossed over the thing that I think just gets glossed over all the time, which is you…

0:36:27.1 MK: Oh, tell me more.

0:36:27.2 MH: And those people, well, and all those people, they tried. They’re like, it’s not saying you have to be an elite level programmer. And I think the Cassie’s article that I mentioned, the Strawberry’s Paradox, is very much on that. It doesn’t mean you have to be a hot shit programmer, but discounting the effort to learn, learning SQL and learning VLOOKUP and learning what a left join, what a join is. If you completely skip that and say, “Oh, but somebody just has a great sense of answering business questions.” One, I think that’s actually often discounting what they’ve learned and part of their ability to answer the business questions from struggling through some of the technical aspects of it.

0:37:13.4 MH: Like, learning that stuff helps you understand how data works, right? If you do a thought experiment where somebody’s never had this sense of a join introduced to them and they just say combined data sets, you wind up with kind of the very casual business user where you’re having a very circular discussion because they don’t understand that you need a key to join two data sets. So I think we’re really good at skipping that point of saying, “No, no, no, this is gonna be great.” It’s like, well, no, but the people have to learn that that’s not their interest or their passion, but they’re learning very, very valuable aspects that go into their ongoing cognition to try to learn that technical stuff. That is part of who we are. And we’ve started saying, “Oh, we can skip that. You don’t need to do it at all.”

0:38:06.9 MK: But who’s saying that? Like I know that there is the odd person, but I would… Like, if anyone came to me and said, “I wanna be a data analyst,” and they’re like, “Guess what? AI’s out there, I don’t need to learn any programming.” I tell them to…

0:38:22.3 MH: That’s…

0:38:23.2 MK: I’d be like…

0:38:25.1 MH: A thousand percent what the fucking analytics translators were saying. I definitely dealt with…

0:38:29.3 MK: No, not at all.

0:38:30.8 MH: No. I’ve had people tell me… I had somebody who was a long time Google person adamantly tell me, “No one ever needs to learn code again.” And I was like… He was like, it’s just… He’s like, “No, you don’t need to ever do code.” And I’m like, “I can’t believe.” So, absolutely. And I will say going back pre-AI, there were people who were coming who were enamored with the idea of analysis and the idea of doing stuff with data, but said, “Ooh, I don’t wanna learn anything technical and this analytics translator role, I can just… ” And this was way before Gen AI. I’m not actually denigrating the analytics translator role, only if somebody thinks that means, “I don’t have to have any technical chops.” But I don’t know, Martin…

0:39:19.3 MK: ’cause for example, I know analytics translator is a very contentious thing, but when I think of it, I think of something very different to what you think of. And like, and this is the same situation…

0:39:32.8 MH: I’m looking at the people I know who have jumped on that role. Yeah, sorry.

0:39:36.8 MK: There is a spectrum though, at one end there is like, “I think I can do no programming and AI’s gonna do it all for me.” And then there’s like the middle people that I kind of talked about who might be like not great at it, bit rusty can use it. And then there’s the people that are like, “Why would I ever need AI? I’m such a great programmer.” But it’s always this spectrum.

0:39:54.9 MH: Hearing your description that you gave many people would jump to saying, “Moe thinks that if somebody looks at programming and is not interested in it, they can therefore completely ignore it. They’re… ”

0:40:08.7 MK: No.

0:40:09.1 MH: I just… But I think that that’s how that can be heard.

0:40:16.6 MK: But Martin’s talking a lot…

0:40:16.9 TW: Hold on. Hold on. Let’s…

0:40:17.6 MK: Wait, no, Martin is talking a lot about the fact that there are so many mistakes made. How do you recognize a mistake if you don’t know what the wrong or right output is?

0:40:27.1 TW: Right. And that’s the thing is…

0:40:29.6 MK: And I know I’m using wrong and right in a very binary sense, but…

0:40:33.5 TW: The people who are super excited about what this is gonna do, have probably never done it. And that’s what we’re probably kind of circling around right now. Okay. Let’s put up…

0:40:44.2 MH: No, I wanna say more about that.

0:40:46.5 TW: Circle back. Let’s bring it back. ’cause I think the place I want to go next is I want to talk, Martin, a little bit about sort of this idea, and this goes into a couple things. So one is sort of like Moe to your point, people who are using AI for various things aren’t really necessarily talking about it. And I think sometimes because there’s not a scalable process for the way that I might use an AI, I use it kind of in that first use case as sort of this sort of assistant coach-mentor thing. I’ll just pop open my little ChatGPT and be like, “Hey, I’m thinking about this. What are some ideas you’ve got?” And blah, blah, blah. I’ve never had ChatGPT look at data for me ever. I’ve had Claude look at a couple things, but I’ve never used them to do any kind of analysis of data. But I think this idea of exploring sort of the agentic process in analytics and sort of like, let’s step through some analysis scenarios and maybe look and see where we could leverage it. And Martin, where do you see kind of the best places for analysts to use AI in their day-to-day jobs? And that could be… We can give you some scenarios maybe to help with that.

0:42:00.2 MB: Tim, you look like you’re about to…

0:42:02.7 TW: No.

0:42:02.8 MB: No. Okay.

0:42:02.7 TW: No. I think I’m still… I’m waiting for my Generative AI to tell me that my blood pressure’s come down enough from my last rant to…

[laughter]

0:42:14.1 MB: Yeah, so give me the scenarios.

0:42:17.6 MH: Okay. Perfect. I’ll start with one. So, one that I think about all the time is a lot of what we do in analytics is really thinking through sort of basically an experiment of some kind, or some kind of analysis around this versus this. Like, “We’re gonna try this.” So, one of the really crucial skills for an analyst, I would say is being able to design a good experiment or think through the design of a good experiment. And so like, let’s say somebody comes to you on your team and is like, “Hey, we wanna run this campaign. We wanna see if this is a better way to do this.” Could you use AI to start to work through the answer to that question?

0:42:58.0 MB: I think that’s a… Like, the kind of design of experiments is really interesting, particularly with the new model, o1, with the reasoning capabilities. So it’s the chain of thought capabilities mean that it thinks, “thinks” he says in their quotes, through the process. And it can be a very good constructive critic. So, giving you feedback, giving you alternatives to the point that we made earlier about being generative. And it can come up with lots of things, very quickly. It can generate huge amounts of content, some good, some bad [laughter] So, if you want to just throw in an experiment design, or a hypothesis, or whatever it may be, and ask it to give you feedback and then just keep going. More feedback, more feedback, it will generate lots of it. Some of it you would disregard, but hidden amongst that there will be some gems.

0:44:00.8 MB: Now all of this is talking about the current state of these models. I think it’s not going to be long before, and actually I’m quite interested in o1 and where this goes with the reasoning capabilities. I think you’ll just be able to put in very simple prompts saying what it is that you are looking to achieve and it will spit out very high quality experiments that you can execute.

0:44:26.6 MH: Yeah, I asked the o1 model, how many golf balls will fit inside of 747? It did a pretty good breakdown, honestly [laughter] So, those kinds of reasoning problems. I think it does a good job with, I think Moe brought up something else about sort of, there’s a value in being able to take on and answer a question or understand and answer a business question effectively. And how could an analyst leverage AI to maybe even work with that kind of use case?

0:44:56.4 MB: Moe, can you unpack that for me slightly?

[laughter]

0:45:03.4 TW: You’re saying if somebody comes with a problem coming back and saying these are scenarios for…

0:45:11.1 MH: Well, like…

0:45:12.3 TW: Analysis approaches that…

0:45:12.4 MH: I don’t know.

0:45:11.9 MK: Actually this happened the other day. No shit [laughter] I said I would not talk about this, I said I would not talk about this and here I am talking about it. There was a CMO-type question, and someone put it into ChatGPT to say what are the possible hypotheses that might be an answer to this question. I was a little bit surprised at how good the answer was. And the reason that the answer actually was very good, and I found this with my own experimentation, is I find a lot of the responses I get come down to structuring things very logically. And so it’ll be like reason one, reason two, reason three, reason four, which as someone who ends up writing things into a lot of documents or like writeups, it then becomes a very easy structure to work with in terms of writing it up.

0:46:12.0 MK: And so I was like, “You know what? We are gonna just lean hard into this. We are gonna then tackle this,” Tim, you’ll love this, “Almost as analysis of competing hypotheses and be like, ‘Okay, these are the nine hypotheses that ChatGPT gave us. Let’s go through. Let’s try and knock them out. Let’s see what we can’t, what we have evidence against. What can we say is possibly responsible, partially responsible? What are the data that we have for each one?’” And that’s actually ended up how we ended up structuring our analysis, was based off the hypothesis generated from ChatGPT. There you have it folks. I said I wouldn’t say it and I did.

0:46:48.9 TW: But one, that’s back to that number one, I think the, in Martin’s list of four, like the coaching or mentoring to me. And I feel like that’s…

0:46:58.6 MK: Is it coaching and mentoring? I don’t know if that’s the same.

0:47:03.6 MB: Or it’s the same, whatever.

0:47:04.7 TW: To me that’s what… I mean, that is what Jim Stern has been kind of… I’m now multiple times seeing him due to various iterations where he’s saying, “Ask it for ideas.” And Michael, that’s what you were saying. I’ve used it for that. That…

0:47:21.7 MK: Okay. I… Sorry, Tim, I apologize profusely for interrupting, but I can’t stop my brain from thinking right now. I think of coaching and mentoring as helping you make something you’ve already got better, or get there faster. So for example, it might be like using a different function. It might be QA-ing the work or, you know, making the language more concise. Whereas I think ideation is almost its own separate category, which is distinct to coaching or mentor. Like, I don’t know, but that’s, maybe I’m being too…

0:47:55.2 TW: I don’t know, Martin, I mean, how would you define [laughter] it was … the four.

[laughter]

0:48:00.4 MB: Yeah. I thought of, when I thought about coaching and mentoring, helping you to ask better questions or think about things in different ways was part of that. So, I did see that kind of ideation of things being part of that kind of umbrella.

0:48:18.6 TW: But would you also, I mean, Moe with your, the, “Okay, these are nine, maybe two,” you could say, “These are garbage. I didn’t… ” one, iterating…. There were some that, but then also how would I actually validate that, right? Because there’s multiple ways to validate. I mean, you could take it farther and say, what data would I look at or to get a causal relationship to truly, if this, if my life depended on validating this hypothesis, number three in your list, what would you recommend that I do?

0:48:52.4 TW: Assume infinite resources. You know, I think, which all to me goes through a good iteration. But it’s interesting you asked it for, like, what are some hypotheses not what are the insights? What are the answers, right? You had it be that upstream piece and then, “Okay, we’re gonna put a human in the loop, who’s gonna say which of these are worth pursuing and how,” and hopefully someone was looking at it saying, “Some of these we just factually know there is no data that can validate that hypothesis already in existence. The only way I could do that is to generate some new data that the Generative AI doesn’t have access to. Because I need to run an experiment,” or, “I need to gather some data for my users,” or somewhere else. So, it’s in the process, but it’s not… I still feel like it gets treated as like, “Oh, oh, it’s this close, as it gets better, it’ll generate those nine things and they’ll be CMO-ready.” And it’s like, “No, it’s gonna generate those things and then we need humans and work in the process.”

0:50:00.2 TW: And I don’t wanna come like, hopefully I’m not coming across as anti-Generative AI. I just think there needs to be…

0:50:03.0 MH: Oh you are, Tim.

0:50:04.1 TW: Decision [laughter] Oh, I…

0:50:06.6 MH: You are. No, I’m just kidding.

0:50:06.9 TW: We’re gonna run the transcript of this through Claude and say…

0:50:13.0 MH: That’s right.

0:50:13.1 TW: Who’s the asshole?

0:50:13.8 MK: Yeah. It’s actually really interesting. Like Martin started this whole episode talking about the terrifying scenario we wouldn’t have jobs. And it’s funny, I am also using ChatGPT a lot at the moment for testing different ways to explain a technical concept to stakeholders. So, the other day I needed to describe probabilistic and deterministic-probabilistic, you can tell it’s … probabilistic and deterministic. And I was trying to test out, I actually had a few different models going against each other to figure out what was the best option. But it still comes back to that human component of me looking, knowing my stakeholders well enough, having a good understanding of what concepts they’re familiar with, or what terminology has stuck with them. So that will land. And then sometimes using different bits from different outputs to stitch it together. And yeah, I don’t know. I’m sure maybe when my kids grow up, maybe that step won’t exist, but for now, I definitely feel like I still need that.

0:51:13.9 MB: Well, you keep discounting that. Like, you keep discounting that like, “Oh, well maybe it’ll get to where it’s better.”

0:51:20.4 MK: It might. I’m not the future reader.

0:51:24.8 MB: Well, but I mean this is… This goes back, it’s not new that 10 years ago they were saying, “We’ll get to… I don’t need to learn R don’t need to learn Python. I don’t need to learn SQL, because the computer will just do it for me.” And it’s like the half life of getting partway there, it does something better, and then we have this world of optimism that says, “Oh well this other part that it can’t do now I’m sure it will get there. If I just wait, it will get there.” Like, I feel like there is a tendency to say, “I don’t need to become better at communicating, because I’m sure within six months it’ll just generate… Canva will introduce the next feature that it just says, ‘Here’s the data set, generate the slide deck,’” and then you spiral into, “I’m gonna lose my job.” As opposed to saying no… Like, knowing who the people are that you’re working with, that matters, which of these analogies would work better. What’s the fine-tuned right level? And it’s not that it’s not gonna continue to get better. I mean I’m terrible as a futurist, but I think that it’s like saying, “Oh, well maybe it’ll just do this for me within a few years, I feel like is… ” Yeah.

0:52:45.1 MK: Okay. Number one, I don’t think I’m discounting that stuff, but I just maybe don’t get quite as passionate about it. So, given Tim’s rant though about, you know, we all still need to learn programming skills, we all still need impeccable communication skills. Computers won’t solve the day.

0:53:00.1 MB: I did not say that.

0:53:01.9 MK: Okay. Now it’s just fun.

0:53:04.0 TW: Paraphrasing.

0:53:04.0 MK: Come on, come on.

0:53:06.7 MB: No, I’m… The thing is you’re putting it is… I mean.

0:53:06.9 MH: We’re gonna use ChatGPT to paraphrase what Tim said.

0:53:13.2 MB: You can’t. What I’m saying there’s value in this and then you put a label that Tim says you need to be perfect at this, that…

0:53:22.2 MK: Oh, come on.

0:53:23.3 MB: That is fucking annoying. Right?

0:53:23.4 MK: Okay. Sorry.

0:53:23.9 MB: I mean it’s not… You’re painting it.

0:53:27.1 MK: I take it back. I take it back.

0:53:27.4 TW: Can AI do this folks. I don’t think so.

0:53:31.9 MK: Okay. Tim and I are gonna be banned from being on a show together for a while. But what I was gonna say, Martin, is with the companies that you’re working with and the use cases that you are seeing, if you are starting out in the data space, you have finite time. You do have to choose where to spend your energy and your learning. I guess you probably have quite a good intuition of the direction of the industry and where it’s going. Where would you spend your energy, ’cause we always get this, we’re like, “What is the programming language I should learn? How much time should I spend on learning data visualization, or on communicating results? Or writing up analysis?” It’s like there are so many things to learn where to focus and knowing, I suppose, the pros and cons of AI, like, where would you spend your energy if you were new in the data space?

0:54:18.7 MB: So, full disclosure, I am not an analyst. So, giving career advice to future analysts is… [laughter] You know, I’m not the most qualified there. But I think the fundamentals are always…

0:54:28.8 MH: Or maybe you’re the most qualified. [laughter]

0:54:31.0 MK: Yeah.

0:54:35.2 MB: As I mentioned earlier, being an expert in the field helps you get higher-quality outputs from the AI — you know the questions to ask to steer it. I also think, from an AI perspective and from the Gen AI space, there’s really fundamental value in playing with the tools: play with them, poke them, prod them, pull them to bits, and really look at the outputs that you’re getting to understand where the limits are within these tools. It’s very easy to just take it at face value — it’s an AI, surely it’s a computer, it’s told me the answer. And as I’ve mentioned earlier, this is clearly not true. We can fall asleep at the wheel if we just take the outputs at face value. So yes, on the data end, I would pursue the career, or pursue the skillset, completely ignoring that AI exists. And I would treat learning AI as a separate endeavor in and of itself — to understand what that is and, more importantly, what it isn’t at this moment in time.

0:55:44.6 MH: Yeah. That’s good. All right, we’ve gotta wrap up. This is interesting. I didn’t think we had any passion for this topic at all, but apparently we have quite a bit, so this is awesome. Well, one thing we like to do is go around the horn, share our last call, something that might be of interest to our audience. Martin, you’re our guest. Do you have a last call you’d like to share?

0:56:04.3 MB: Yeah. So there’s Machine Learning Street Talk, a podcast about machine learning. They recently did an episode on whether o1-preview is reasoning — is the new OpenAI model actually reasoning? It’s about an hour-and-a-half discussion, a deep dive, quite philosophical in nature: What is reasoning? What is knowledge? Is what these language models are doing truly reasoning? It’s really fascinating for anyone who’s interested in learning more about that.

0:56:35.3 MH: Nice. Awesome. Thank you.

0:56:37.2 TW: It’s funny — that same post by Cassie talks about how, I can’t remember which model, but it says “thinking,” and she was like, “It’s not thinking — it’s kind of poking a little bit of fun at the human while it’s spinning.” I was like, “Oh, I never thought about that.”

0:56:56.1 MH: Appearing to be human. All right, Moe, what about you? What’s your last call?

0:57:02.4 MK: Okay. Mine’s a weird one — something that has nothing to do with Gen AI. I’m doing a professional leadership course internally. I’m very lucky that we have internal coaches at Canva and that we get the opportunity to do this. The topic we covered last week was our leadership values and what’s called our leadership shadow. I had written my leadership values a few years ago — I’d run them past some mentors and people that I chat with, and I was pretty happy with them. And of course I dusted them off the shelf, looked at them, and was like, “Yeah, shit.” I think what really stood out to me is that at the time I wrote them, they were all very aspirational and, I would say, very soft-skill-based. And I didn’t feel that I had something there that captured the team’s output or drive.

0:58:00.4 MK: And I realized over the last few years that is something that’s really important to me. So number one, this is a reminder: if you do have leadership values, go check on them. But the other thing that happened is we started talking about our leadership shadow. That’s where you say something’s important to you, but the way you behave doesn’t show up in the same way. An example — not a reflection of me at all — is that you say your team is the most important thing, you really care about everyone that you manage, but then you move your one-to-ones regularly, or you reschedule the team meeting every month, or something like that. So it’s about identifying where you’re saying things are important but your behavior would look quite different to someone in the team seeing it. And yeah, it was a challenging exercise, but a good one: you overlay that with the values that really are true to you and how you’re gonna show up, and make sure that you’re demonstrating that to the team.

0:58:57.4 MH: You should throw those into ChatGPT and say, “What is my leadership shadow?”

0:59:00.0 MK: I don’t think it knows me well enough yet.

0:59:02.9 TW: Yeah.

0:59:03.0 MH: But I bet it will, give it a couple weeks.

[overlapping conversation]

0:59:08.2 MK: I feel like maybe I should let Tim be in charge of the prompting and then maybe we would get some real gold there. [laughter]

0:59:13.5 MH: All right. Well Tim, what’s your last call?

0:59:16.9 TW: So, I’m gonna do a twofer. My first one is a plug for the Data Connect Conference — we’re just a little less than a year out from it; it’s in early October of 2025. I’ve talked about it before, we’ve done promos for it. It’s dataconnectconf.com. The call for speakers is already open. So if you are, or if you know, someone who is a woman or a genderqueer, gender non-conforming, or non-binary individual who would have something to speak about at a data conference, consider putting in a pitch. It’s a great conference, open to all to attend, just limited on who the speakers are. So that’s my plug for that conference and getting great content there. And then, as my actual last call, which maybe does tie into this topic: there’s a guy named Peder Isager — I don’t know how to pronounce his last name — who wrote a post called Eight Basic Rules for Causal Inference.

1:00:25.6 TW: What’s funny is the URL actually says seven basic rules for causal inference, so I am really curious as to which one he hadn’t initially thought of. But it gives simple little diagrams. The first couple I’m like, “Yeah, knew that, knew that,” and then it got really interesting. So, when it comes to the topic we had today, I think causality is one of those things that is really kind of profound and tricky. And it was a nice post, with simple little diagrams that make you think, “Oh, this is why all the answers are not just in the data that I’ve already collected.” So, Eight Basic Rules for Causal Inference. Michael, what’s your last call?

1:01:08.8 MH: Well, in the spirit of this topic: a couple of people that I know very well — I hired them both, and they used to work for me — have started a startup in the AI space called Moonbird, moonbird.ai. They are building agentic tools and services and things like that, but their first product is an AI agent specifically for looking at an Adobe Analytics implementation. So, if I were walking into a situation where I was looking at an Adobe implementation today, I would be using that tool to bring me up to speed, give me information, and provide me some knowledge.

1:01:46.0 MH: So, if you’re in that space, it’s a great little tool for that. So big shout out to the Moonbird team over there. All right, well, Martin, thank you so much for coming on the podcast. Who knew that little networking session at Marketing Analytics Summit would eventually lead to this? [laughter] Martin and I were at a table together at Marketing Analytics Summit, we got to introduce ourselves, and here we are. So thank you, Martin.

1:02:13.1 MB: Thank you. Yeah, it was some good dim sum we had. [laughter]

1:02:17.3 MH: Yeah. That’s right. All right. And of course, no show would be complete without a huge shout out to Josh Crowhurst, our producer, who does so much behind the scenes to make things happen. Josh, thank you. And of course, a big shout out and thank you to Tim and Moe, my co-hosts, for bringing so much life and passion to this episode.

1:02:36.2 MK: Arguing. You mean arguing.

1:02:38.1 MH: Yeah. Well, you know, I asked ChatGPT like, “Give me a positive spin on all this bullshit.” [laughter]

1:02:47.8 MH: All right. Well, this is an awesome topic — obviously one I think is super interesting — and it’s growing and becoming more and more a part of the conversation. So this is probably not the first or the last time we’ll talk about it on this podcast, but I like the start we got today. So again, thank you, Martin. And as you’re out there, we’d love to hear from you: what are you using AI for? What kinds of things do you see in your work? It’s easy to reach out to us — you can get a hold of us on the Measure chat group or on LinkedIn. And we also now have a YouTube channel, so you can check us out there as well.

1:03:21.5 MH: So, go ahead and reach out to us. We’d love to hear from you — unless you’re pitching an AI-related topic or guest from a PR auto-bot-type situation. We do get a lot of those emails, but we’ll do the picking. Thank you very much, and I think we got the right person for this today. All right, anyway, I know that as you’re going through life, you’re gonna be using AI more and more, so keep the good work going. And I know I speak for both of my co-hosts, Tim and Moe, when I say: keep analyzing.

1:04:00.7 Announcer: Thanks for listening. Let’s keep the conversation going with your comments, suggestions, and questions on Twitter at @analyticshour, on the web at analyticshour.io, our LinkedIn group and the Measure Chat Slack group. Music for the podcast by Josh Crowhurst.

[background conversation]
