Oracle AI in Fusion Cloud Human Capital Management
In this special episode of the Oracle University Podcast, Lois Houston and Nikita Abraham, along with Principal HCM Instructor Jeff Schuster, delve into the intersection of HCM and AI, exploring the practical applications and implications of this technology in human resources.
Jeff shares his insights on bias and fairness, the importance of human involvement, and the need for explainability and transparency in AI systems. The discussion also covers the various AI features embedded in HCM and their impact on talent acquisition, performance management, and succession planning.
Oracle AI in Fusion Cloud Human Capital Management: https://mylearn.oracle.com/ou/learning-path/oracle-ai-in-fusion-cloud-human-capital-management-hcm/136722
Oracle Fusion Cloud HCM: Dynamic Skills: https://mylearn.oracle.com/ou/course/oracle-fusion-cloud-hcm-dynamic-skills/116654/
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
Twitter: https://twitter.com/Oracle_Edu
Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode.
--------------------------------------------------------
00:00
Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!
00:26
Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs here at Oracle University, and with me, is Nikita Abraham, Team Lead of Editorial Services.
Nikita: Hi everyone! Last week’s conversation was all about Oracle Database 23ai backup and recovery, where we dove into instance recovery and effective recovery strategies. Today’s episode is a really special one, isn’t it, Lois?
00:53
Lois: It is, indeed, Niki. Of course, all of our AI episodes are special. But today, we have our friend and colleague Jeff Schuster with us. I think our listeners are really going to enjoy what Jeff has to share with us.
Nikita: Yeah definitely! Jeff is a Principal HCM Instructor at Oracle University. He recently put together this really fantastic course on MyLearn, all about the intersection of HCM and AI, and that’s what we want to pick his brain about today. Hi Jeff! We’re so excited to have you here.
01:22
Jeff: Hey Niki! Hi Lois! I feel special already. Thank you guys so much for having me.
Nikita: You’ve had a couple of busy months, haven’t you?
01:29
Jeff: I have! It’s been a busy couple of months with live classes. I try and do one on AI in HCM at least once a month or so, so that we can keep up with the latest/greatest stuff in that area. And I also got to spend a few days at Cloud World teaching a few live classes (about artificial intelligence in HCM, as a matter of fact) and meeting our customers and partners. So yeah, absolutely great week. A good time was had by me.
01:55
Lois: I’m sure. Cloud World is such a great experience. And just to clarify, do you think our customers and partners also had a good time, Jeff? It wasn’t just you, right?
Jeff: Haha! I don’t think it was just me, Lois. But, you know, HCM is always a big deal, and now with all the embedded AI functionality, it really wasn’t hard to find people who wanted to spend a little extra time talking about AI in the context of our HCM apps. So, there are more than 30 separate AI-powered features in HCM. AI features for candidates to find the right jobs; for hiring managers to find the right candidates; skills, talent, performance management, succession planning—all of it is there and it really covers everything across the Attract/Grow/Keep buckets of the things that HR professionals do for a living. So, anyway, yeah, lots to talk about with a lot of people!
There’s the functional part that people want to know about—what are these features and how do they work? But obviously, AI carries with it all this cultural significance these days. There’s so much uncertainty that comes from this pace of development in that area. So in fact, my Cloud World talk always starts with this really silly intro that we put in place just to knock down that anxiety and get to the more practical, functional stuff.
03:11
Nikita: Ok, we’re going to need to discuss the functional stuff, but I feel like we’re getting a raw deal if we don’t also get that silly intro.
Lois: She makes a really good point.
Jeff: Hahaha! Alright, fair enough. Ok, but you guys are gonna have to imagine I’ve got a microphone and a big room and a lot of echo.
AI is everywhere. In your home. In your office. In your homie’s home office.
03:39
Lois: I feel like I just watched the intro of a sci-fi movie.
Jeff: Yeah. I’m not sure it’s one I’d watch, but I think more importantly it’s a good way to get into discussing some of the overarching things we need to know about AI and Oracle’s approach before we dive into the specific features, so you know, those features will make more sense when we get there?
03:59
Nikita: What are these “overarching” things?
Jeff: Well, the things we work on anytime we’re touching AI at Oracle. So, you know, it starts with things like Bias and Fairness. We usually end up in a pretty great conversation about things like how we avoid bias on the front end by making sure we don’t ingest things like bias-generating content, which is to say data that doesn’t necessarily represent bias by itself, but could be misused. And that pretty naturally leads us into a talk about guardrails.
Nikita: Guardrails?
Jeff: Yeah, you can think of those as checkpoints. So, we’ve got rules about ingestion and bias. And if we check the output coming out of the LLM to ensure it complied with the bias and fairness rules, that’s a guardrail. So, we do that. And we do it again on the apps side. And so that’s to say, even though it’s already been checked on the AI side, before we bring the output into the HCM app, it’s checked again. So another guardrail.
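A minimal sketch of the double-guardrail pattern Jeff describes, for readers who want to picture it. Everything here is hypothetical: the rule set, the function names, and the `llm` callable are stand-ins for whatever the AI-side and app-side checks actually do, not Oracle’s implementation.

```python
# Illustrative double-guardrail pipeline: the LLM output is checked once
# on the AI side and again on the app side before the HCM user sees it.
BLOCKED_TERMS = {"age", "gender", "nationality"}  # toy bias/fairness rules

def ai_side_guardrail(text: str) -> bool:
    """First checkpoint: applied to the raw LLM output."""
    return not any(term in text.lower() for term in BLOCKED_TERMS)

def app_side_guardrail(text: str) -> bool:
    """Second checkpoint: applied again before the HCM app accepts the text."""
    return len(text) > 0 and ai_side_guardrail(text)

def generate_with_guardrails(prompt: str, llm) -> str:
    draft = llm(prompt)  # llm is any callable that returns generated text
    if not ai_side_guardrail(draft):
        raise ValueError("Output rejected by AI-side guardrail")
    if not app_side_guardrail(draft):
        raise ValueError("Output rejected by app-side guardrail")
    return draft
```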
04:58
Lois: How effective is that? The guardrails, and not taking in data that’s flagged as bias-generating?
Jeff: Well, I’ll say this: It’s both surprisingly good, and also nowhere near good enough.
Lois: Ok, that’s as clear as mud. You want to elaborate on that?
Jeff: Haha! I think all it means is that approach does a great job, but our second point in the whole “standards” discussion is about the significance of having a human in the loop. Sometimes more than one, but the point here is that, particularly in HCM, where we’re handling some really important and sensitive data, and we’re introducing really powerful technology, the H in HCM gets even more important. So, throughout the HCM AI course, we talk about opportunities to have a human in the loop. And it’s not just for reviewing things. It’s about having the AI make suggestions, and not decisions, for example. And that’s something we always have a human in the loop for. In fact, when I started teaching AI for HCM, I always said that I like to think of it as a great big brain, without any hands.
06:00
Nikita: So, we’re not talking about replacing humans in HCM with AI.
Jeff: No, but we’re definitely talking about changing what the humans do and why it’s more important than ever what the humans do.
So, think of it this way, we can have our embedded AI generate this amazing content, or create really useful predictions, whatever it is that we need. We can use whatever tools we want to get there, but we can still expect people to ask us, “Where did that come from?” or “Does this account for [whatever]?”. So we still have to be able to answer that. So that’s another thing we talk about as kind of an overarching important concept: Explainability and Transparency.
06:41
Nikita: I’m assuming that’s the part about showing our work, right? Explaining what's being considered, how it's being processed, and what it is that you're getting back.
Jeff: That’s exactly it. So we like to have that discussion up front, even before we get to things like Gen and Non-Gen AI, because it’s great context to have in mind when you start thinking about the technology. Whenever we’re looking at the tech or the features, we’re always thinking about whether people are appropriately involved, and whether people can understand the AI product as well as they need to.
07:11
Lois: You mentioned Gen and Non-Gen AI. I’ve also heard people use the term “Classic AI.” And lately, a lot more about RAG and Agents. When you're teaching the course, does everybody manage to keep all the terminology straight?
Jeff: Yeah, people usually do a great job with this. I think the trick is, you have to know that you need to know it, if that makes sense.
Lois: I think so, but why don’t you spell it out for us.
Jeff: Well, the temptation is sometimes to leave that stuff to the implementers or product developers, who we know need to have a deep understanding of all of that. But I think what we’ve learned is, especially because of all the functional implications, practitioners, product owners, everybody needs to know it too. If for no other reason so they can have more productive conversations with their implementers.
You need to know that Classic or Non-Generative AI leverages machine learning, and that that’s all you need in order to do some incredibly powerful things like predictions and matching.
So in HCM, we’re talking about things like predicting time to hire, identifying suggested candidates for job openings, finding candidates similar to ones you already like, suggesting career paths for employees, and finding recommended successors. All really powerful matching stuff. And all of that stuff uses machine learning and it’s certainly AI, but none of that uses Generative AI to do that because it doesn’t need to.
08:38
Nikita: So how does that fit in with all the hype we’ve been hearing for a long time now about Gen AI and how it’s such a transformative technology that’s going to be more impactful than anything else?
Jeff: Yeah, and that can be true too. And this is what we really lean into when we do the AI in HCM course live. It’s much more of a “right AI for the right job” kind of proposition.
Lois: So, just like you wouldn’t use a shovel to mix a cake. Use the right tool for the job. I think I’ve got it. So, the Classic AI is what’s driving those kinds of features in HCM? The matching and recommendations?
Jeff: Exactly right. And where we need generative content, that’s where we add on the large language model capability. With LLMs, we get the ability to do natural language processing. So it makes sense that that’s the technology we’d use for tasks like “write me a job description” or “write me performance development tips for my employee”.
09:33
Nikita: Ok, so how does that fit in with what Lois was asking about RAG and Agents? Is that something people care about, or need to?
Jeff: I think it’s easiest to think about those as the “what’s next” pieces, at least as it relates to the embedded AI. They kind of deal with the inherent limitations of Gen and Non-Gen components. So, RAG, for example - I know you guys know, but your listeners might not...so what’s RAG stand for?
Lois & Nikita: Retrieval. Augmented. Generation.
Jeff: Hahaha! Exactly. Obviously. But I think everything an HCM person needs to know about that is in the name. So for me, it’s easiest to read that one backwards. Retrieval Augmented Generation. Well, the Generation just means it’s more generative AI. Augmented means it’s supplementing the existing AI. And Retrieval just tells you that that’s how it’s doing it. It’s going out and fetching something it didn’t already have in order to complete the operation.
10:31
Lois: And this helps with those limitations you mentioned?
Nikita: Yeah, and what are they anyway?
Jeff: I think an example most people are familiar with is that large language models are trained on this huge set of information. To a certain point. So that model is trained right up to the point where it stopped getting trained. So if you’re talking about interacting with ChatGPT, as an example, it’ll blow your doors off right up until you get to about October of 2023 and then, it just hasn’t been trained on things after that. So, if you wanted to have a conversation about something that happened after that, it would need to go out and retrieve the information that it needed.
For us in HCM, what that means is taking the large language model that you get with Oracle, and using retrieval to augment the AI generation for the things that the large language model wouldn’t have had.
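As a rough illustration of retrieval augmented generation, here is a toy sketch: a keyword-overlap retriever stands in for a real vector store, and the retrieved company-specific snippets are prepended to the prompt before generation. None of this reflects Oracle’s actual RAG plumbing; it just shows the Retrieval, Augmented, Generation steps in order.

```python
# Toy RAG: retrieve documents the base model never saw, then generate.
def retrieve(query: str, store: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank stored snippets by word overlap with the query (stand-in for
    a real embedding/vector-store lookup)."""
    q = set(query.lower().split())
    scored = sorted(store.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return [text for _, text in scored[:top_k]]

def rag_generate(query: str, store: dict[str, str], llm) -> str:
    # Augment: prepend retrieved company context to the visible prompt.
    context = "\n".join(retrieve(query, store))
    prompt = f"Use this company context:\n{context}\n\nTask: {query}"
    return llm(prompt)  # Generation: ordinary LLM call, now better informed
```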
11:22
Nikita: So, things that happened after the model was trained? Company-specific data? What kind of augmenting are you talking about?
Jeff: It’s all of that. All those things happen and it’s anything that might be useful, but it’s outside the LLM’s existing scope. So, let’s do an example. Let’s say you and Lois are in the market to hire someone. You’re looking for a Junior Podcast Assistant. We’d like the AI in HCM to help, and in order to do that, it would be great if it could not just generate a generic job description for the posting, but it could really make it specific to Oracle. Even better, to Oracle University.
So, you’d need the AI to know a few more things in order to make that happen. If it knows the job level, and the department, and the organization—already the job posting description gets a lot better. So what other things do you think it might need to know?
12:13
Lois: Umm I’m thinking…does it need to account for our previous hiring decisions? Can it inform that at all?
Jeff: Yes! That’s actually a key one. If the AI is aware not only of all the vacancies and all of the transactional stuff that goes along with it (like you know who posted it, what’s its metadata, what business group it was in, and all that stuff)...but it also knows who we hired, that’s huge.
So if we put all that together, we can start doing the really cool stuff—like suggesting candidates based not only on their apparent match on skills and qualifications, but also based on folks that we’ve hired for similar positions. We know how long it took to make those hires from requisition open to the employee’s first start date. So we can also do things like predicting time to hire for each vacancy we have with a lot more accuracy. So now all of a sudden, we’re not just doing recruiting, but we have a system that accounts for “how we do it around here,” if that makes any sense.
But the point is, it’s the augmented data, it’s that kind of training that we do throughout ingestion, going out to other sources for newer or better information, whatever it is we need. The ability to include it alongside everything that’s already in the LLM, that’s a huge deal.
13:31
Nikita: Ok, so I think the only one we didn’t get to was Agents.
Jeff: Yeah, so this one is maybe a little less relevant in HCM—for now anyway. But it’s something to keep an eye on. Because remember earlier when I described our AI as having a great big brain but no hands?
Lois: Yeah...
Jeff: Well, agents are a way of giving it hands. At least for a very well-defined, limited set of purposes. So routine and repetitive tasks. And for obvious reasons, in the HCM space, that causes some concerns. You don’t want, for example, your AI moving people forward in the recruiting process or changing their status to “not considered” all by itself. So going forward, this is going to be a balancing act. When we ask the same thing of the AI over and over again, there comes a point where it makes sense to kind of “save” that ask. When, for example, we get the “compare a candidate profile to a job vacancy” results working just right, we can create an agent: just that one AI call that specializes in getting that analysis right. It does the analysis, it hands it back to the LLM, and once the human has what they need to make a decision out of it, you’ve got automation on one hand and human hands on the other...hand.
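A hedged sketch of “saving” a repeated ask as an agent, per Jeff’s description: one specialized call that performs the candidate-versus-vacancy analysis and hands the result back for a human decision. The class name and prompt text are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class CompareAgent:
    """One saved ask: compare a candidate profile to a job vacancy.
    The agent only analyzes; a human still makes the decision."""
    llm: callable
    saved_prompt: str = ("Compare this candidate profile to this job "
                         "vacancy and summarize strengths and gaps:\n")

    def run(self, candidate: str, vacancy: str) -> str:
        # Note what the agent deliberately does NOT do: advance or reject
        # anyone. It returns its analysis for the human in the loop.
        return self.llm(self.saved_prompt + candidate + "\n---\n" + vacancy)

# Usage (with some llm callable):
#   agent = CompareAgent(llm=my_llm)
#   analysis = agent.run(candidate_profile_text, vacancy_text)
```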
14:56
Have you mastered the basics of AI? Are you ready to take your skills to the next level? Unlock the potential of advanced AI with our OCI Generative AI Professional course and certification that covers topics like large language models, the OCI Generative AI Service, and building Q&A chatbots for real-world applications. Head over to mylearn.oracle.com to find out more.
15:26
Nikita: Welcome back! Jeff, you’ve mentioned the “Time to Hire” feature a few times? Is that a favorite with people who take your classes?
Jeff: The recruiting folks definitely seem to enjoy it, but I think it’s just a great example for a couple of reasons. First, it’s really powerful non-generative AI. So it helps emphasize the point around the right AI for the right job. And if we’re talking about things in chronological order, it’s something that shows up really early in the hire-to-retire cycle.
And, you know, just between us learning nerds, I like to use Time to Hire as an early example because it gets folks in the habit of working through some use cases. You don’t really know if a feature is going to get you what you need until you’ve done some of that.
So, for example, if I tell you that Time to Hire produces an estimated number of days to your first hire. And you’re still Lois, and you’re still Niki, and you’re hiring for a Junior Podcast Assistant. So why do you care about time to hire? And I’m asking you for real—What would you do with that prediction if you had it?
16:29
Nikita: I guess I’d know how long it is before I can expect help to arrive, and I could plan my work accordingly.
Jeff: Absolutely. What else? What could you do with a prediction for Time to Hire?
Lois: Think about coverage?
Jeff: Yeah! Exactly the word I was looking for. Say more about that.
Lois: Well, if I know it’s gonna be three months before our new assistant starts, I might be able to plan for some temporary coverage for that work. But if I had a prediction that said it’s only going to be two weeks before a new hire could start, it probably wouldn’t be worth arranging temporary coverage.
Niki can hold things down for a couple of weeks.
Jeff: See, I’m positive she could! That’s absolutely perfect! And I think that’s all you really need to have in terms of prerequisites to understand any of the AI features in HCM. When you know what you might want to do with it, like predicting the need for temp cover, and you’ve got everything we talked about in the foundation part of the course—the Gen and the Classic, all that stuff, you can look at a feature like Time to Hire and then you can probably pick that up in 30 seconds.
17:29
Nikita: Can we try it?
Jeff: Sure! I mean, you know, we’re not looking at screens for this conversation, but we can absolutely try it. You’re a recruiter. If I tell you that Time to Hire is a feature that you run into on the job requisition and it shows you just a few editable fields, and then of course, the prediction of the number of days to hire—tell me how you think that feature is going to work when you get there.
Lois: So, what are the fields? And does it matter?
Jeff: Probably not really, but of course you can ask. So, let me tell you. Ready? The fields—they are these. Requisition Title, Location, and Education Level.
Nikita: Ok, well, I have to assume that as I change those things… like from a Junior Podcast Assistant to a Senior Podcast Assistant, or change the location from Redwood Shores to Detroit, or change the required education, the time to hire is going to change, right?
Jeff: 100%, exactly. And it does it in real time as you make those changes to those values. So when you pick a new location, you immediately get a new number of days, so it really is a useful tool.
But how does it work? Well, we know it’s using a few fields from the job requisition, but that’s not enough. Besides those fields, what else would you need in order to make this prediction work?
18:43
Lois: The part where it translates to a number of days. So, this is based on our historic hiring data? How long it took us to hire a podcast assistant the last time?
Jeff: Yep! And now you have everything you need. We call that “historic data from our company” bit “ingestion,” by the way. And there’s always a really interesting discussion around that when it comes up in the course. But it’s the process we use to bring in the HCM data to the AI so it can be considered for predictions exactly like this.
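To make the ingestion-plus-prediction idea concrete, here is a toy non-generative model: a regression trained on a couple of hypothetical historical requisitions, predicting days to hire from the same three fields Jeff mentioned. Oracle’s actual model, features, and training data are, of course, far richer; this only shows the shape of the technique.

```python
# Time to Hire-style prediction: classic machine learning, no Gen AI needed.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LinearRegression

# Ingested history: requisition fields plus the actual days-to-hire
# (requisition open to first start date). Values are made up.
history = [
    ({"title": "Junior Podcast Assistant", "location": "Redwood Shores",
      "education": "Bachelors"}, 21),
    ({"title": "Senior Podcast Assistant", "location": "Detroit",
      "education": "Masters"}, 45),
]

vec = DictVectorizer()                      # one-hot encodes the text fields
X = vec.fit_transform([req for req, _ in history])
y = [days for _, days in history]
model = LinearRegression().fit(X, y)

# Edit a field on the requisition and the prediction updates immediately.
new_req = {"title": "Junior Podcast Assistant", "location": "Detroit",
           "education": "Bachelors"}
print(round(model.predict(vec.transform([new_req]))[0]), "days to hire")
```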
Lois: So it’s the HCM data making the AI smarter and more powerful.
Nikita: And tailored.
Jeff: Exactly, it’s all of that. And obviously, the HCM is better because we’ve given it the AI. But the AI is also better because it has the HCM in it.
But look, I was able to give you a quick description of Time to Hire, and you were able to tell me what it does, which data it uses, and how it works in just a few seconds.
So, that’s kind of the goal when we teach this stuff. It’s getting everybody ready to be productive from moment #1 because the “what is it and how does it work” stuff is already out of the way, you know?
19:52
Lois: I do know!
Nikita: Can we try it with another one?
Jeff: Sure! How about we do...Suggested Candidates.
Lois: And you’re going to tell us what we get on the screen, and we have to tell you how it works, right?
Jeff: Yeah, yeah, exactly. Ok—Suggested Candidates. You’re a recruiter or a hiring manager. You guys are still looking for your Junior Podcast Assistant. On the requisition, you’ve got a section called Suggested Candidates. And you see the candidate’s name and some scores.
Those scores are for profile match, skills match, experience match. And there’s also an overall match score, and you notice the highest-rated people are sorted to the top of the list. So, you with me so far?
Lois: Yes!
Jeff: So you already know that it’s suggesting candidates. But if you care about explainability and transparency like we talked about at the start, then you also care about where these suggested candidates came from. So let’s see if we can make progress against that. Let’s think about those match scores. What would you need in order to come up with match scores like that?
20:54
Nikita: Tell me if I’m oversimplifying this, but everything about the job on the requisition, and everything about the candidate? Their skills and experience?
Jeff: Yeah, that’s actually simplified pretty perfectly. So in HCM, the candidate profile has their skills and experience, and the req profile has the req requirements.
Lois: So we’re comparing the elements of the job profile and the person/candidate profile. And they’re weighted, I assume?
Jeff: That’s exactly how it works. See, 30 seconds and you guys are nailing these! In fairness, when we discuss these things in the course, we go into more detail. And I think it’s helpful for HCM practitioners to know which data from the person and the job profiles is being considered (and sometimes just as important, which is not being considered). And don’t forget we’re also considering our ingested data. Our previously selected candidates.
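A small sketch of weighted match scoring under the assumptions just discussed: per-dimension overlap between the requisition’s requirements and the candidate’s profile, combined with fixed weights and sorted highest first. The weights, fields, and names are illustrative only; as the conversation turns to next, the embedded feature’s weighting isn’t something you can configure.

```python
# Hypothetical weighted match scoring for Suggested Candidates.
WEIGHTS = {"profile": 0.3, "skills": 0.4, "experience": 0.3}

def dimension_score(required: set, candidate: set) -> float:
    """Fraction of the requisition's requirements the candidate covers."""
    return len(required & candidate) / len(required) if required else 1.0

def overall_match(req: dict, cand: dict) -> float:
    return sum(WEIGHTS[d] * dimension_score(req[d], cand[d]) for d in WEIGHTS)

req = {"profile": {"media"}, "skills": {"editing", "writing"},
       "experience": {"podcasting"}}
candidates = {
    "Ada": {"profile": {"media"}, "skills": {"editing"}, "experience": set()},
    "Grace": {"profile": {"media"}, "skills": {"editing", "writing"},
              "experience": {"podcasting"}},
}
# Highest overall match sorts to the top, as on the requisition page.
for name, cand in sorted(candidates.items(),
                         key=lambda kv: overall_match(req, kv[1]),
                         reverse=True):
    print(name, round(overall_match(req, cand), 2))
```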
21:45
Lois: Jeff, can I change the weighting? If I care more about skills than experience or education, can I adjust the weighting and have it re-sort the candidates?
Jeff: Super important question. So let me give you the answer first, which is “no.” But because it’s important, I want to tell you more. This is a discussion we have in the class around Oracle’s Embedded vs. Custom AI. And they’re both really important offerings. With Embedded, what we’re talking about are the features that come in HCM like any other feature.
They might have some enablement steps like profile options, and there’s an activation panel. But essentially, that’s it. There’s no inspection panel for you to open up and start sticking your screwdriver in there and making changes. Believe it or not, that’s a big advantage with Embedded AI, if you ask me anyway.
Nikita: It’s an advantage to not be able to configure it?
Jeff: In this context, I think you can say that it is. You know, we talk about the advantages of the baked-in, Embedded AI in this course, but one of the key things is that it’s pre-built and pre-tested. And the big one: that it’s ready to use on day one. But one little change in a prompt can have a pretty big butterfly effect across all of your results. So, Oracle provides the Embedded AI because we know it works: we’ve already tested it, and it’s, therefore, ready on day one.
And I think that story maybe changes a little bit when you open up the inspection panel and bust out that screwdriver. Now you’re signing up to be a test pilot. And that’s just fundamentally different than “pre-built and ready on day one.” Not that it’s bad to want configuration.
23:24
Lois: That’s what the Custom AI path and OCI are about though, right? For when customers have hyper-specific needs outside of Oracle’s business processes within the apps, or for when that kind of tuning is really required. And your AI for HCM course—that focuses on the Embedded AI instead of Custom, yes?
Jeff: That is exactly it, yes.
Nikita: You said there are about 30 of these AI features across HCM. So, when you teach the course, do you go through all of them or are there favorites? Ones that people want to spend more time on so you focus on those?
Jeff: The professional part of me wants to tell you that we do try to cover all of them, because of that explainability and transparency business we talked about at the beginning. That’s for real, so I want our customers to have that for the whole scope.
24:12
Nikita: The professional part? What’s the other part?
Jeff: I guess that’s the part that says sure, we need to hit all of them. But some of them are just inherently more fun to work on. So, it’s usually the learners who drive that in the live classes: when they get into something, that’s where we spend the most time. So, I have my favorites too. The learners have their favorites. And we spend time where it’s everybody’s favorite.
Lois: Like where?
Jeff: Ok, so one is far from the most complex one, but I think it’s really elegant in its simplicity. And it’s the Celebrate feature, where we do employee recognition. There’s an AI Assist available there. So when it’s time to recognize a colleague, you just need to enter the headline or the title, and the AI takes it from there and just writes up the recognition.
24:56
Lois: What about that makes it a good example, Jeff? You said it’s elegant. What do you mean?
Jeff: I think it’s a few things. So, start with the prompt. It’s just the one line—just the headline. And that’s your one input. So, type in the headline, get the recognition below. It’s a great demonstration of not just the simplicity, but the power we get out of that simplicity. I always ask it to recognize my employees for implementing AI features in Oracle HCM, just to see what it comes up with.
When it tells the employee that they’re helping the company by automating routine tasks, bringing efficiency to the HR department, and then launches into specific examples of how AI features help in HCM, it really is pretty incredible. So, it’s a simple demo, but it explains a lot about how the Gen AI works.
Lois: That’s really cool.
25:45
Nikita: So this one is generative AI. It’s using the large language model to create the recognition based on the prompt, which is basically just whatever you entered in the headline. But how does that help explain how Gen AI works in HCM?
Jeff: Well, let’s take our simple prompt for example. There’s a lot happening behind the scenes. It’s taking our prompt, it’s doing its LLM thing, but before it’s done, it’s creating the results in a very specific way. An employee recognition reads really differently than a job description. So, I usually describe this as the hidden part of our prompt. The visible part is what we typed. But it needs to know things like our desired output format. Make sure to use the person’s name, summarize the benefits, and be sure to thank them for their contribution, that kind of stuff. So, those things are essentially hard-coded into the page. And that’s to say, this is another area where we don’t get an inspection panel that lets us go in and tweak the prompt.
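One way to picture the hidden prompt Jeff describes: hard-coded instructions assembled with the user’s one-line headline before the LLM call. The instruction text below is made up for illustration; the real page’s prompt isn’t exposed, which is exactly the point about there being no inspection panel.

```python
# The "hidden part" of the prompt lives in the page; the headline is the
# only visible input the user types.
HIDDEN_PROMPT = (
    "Write an employee recognition message. Use the person's name, "
    "summarize the benefits of their work, and thank them for their "
    "contribution. Keep it warm and concise.\n\nHeadline: "
)

def ai_assist_recognition(headline: str, llm) -> str:
    return llm(HIDDEN_PROMPT + headline)

# e.g. ai_assist_recognition(
#     "Recognize Pat for implementing AI features in Oracle HCM", my_llm)
```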
26:42
Nikita: And that’s generally how generative AI works?
Jeff: Pretty much. Wherever you see an AI Assist button in HCM, that’s more or less what’s going on. And so when you get to some of the other more complex features, it’s helpful to know that that is what’s going on.
Lois: Like where?
Jeff: Well, it works that way for the About Me part of your employee profile, for goal creation in performance, and I think a really great example is in performance, where managers are providing the competency development tips.
So the prompt there is a little more complex because it involves the employee’s proficiency rating instead of free text. But still, pretty straightforward. You’re gonna click AI Assist and it’s gonna generate all the development tips for any specific competency listed for that employee. Good development tips. Five of them. Nicely formatted with bullet points. And these aren’t random words assembled by an AI; they conform to best practices in the development of competencies. So, something is telling the LLM to give us results that are that good, in that particular way.
So, it’s just another good example of the work AI is doing while protected behind the inspection panel that doesn’t exist. So, the coding of that page, in combination with what the LLM generates and the agent that it uses, is what produces the result. That’s generally the approach. In the class, we always have a good time digging into what must be going on behind that inspection panel. Generally speaking, the better feel we have for what’s going on on these pages, the better we’re able to get the results we want, even without having that screwdriver out.
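And a variation on the same pattern for the competency development tips example: here the hidden prompt is parameterized by structured data (the proficiency rating) rather than free text. Again, the wording is purely illustrative, not the page’s actual prompt.

```python
# Same hidden-prompt pattern, parameterized by structured HCM data.
def development_tips(competency: str, proficiency: int, llm) -> str:
    prompt = (f"The employee is rated {proficiency}/5 on '{competency}'. "
              "Give exactly five development tips as bullet points, "
              "following competency-development best practices.")
    return llm(prompt)
```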
28:21
Nikita: So it’s time well-spent, looking at all the individual features?
Jeff: I think so, especially if you’re anticipating really using any of them. So, the good news is, once you learn a few of them and how they work, and what they’re best at, you stop being surprised after a while. But there are always tips and tricks. And like we talked about at the top, explainability and transparency are absolutely key. So, as much as I’m not a fan of the phrase, I do think this is kind of a “knowledge is power” kind of situation.
28:51
Nikita: Sadly, we’re just about out of time for this episode.
Lois: That’s too bad, I was really enjoying this. Jeff, you were just talking about knowledge—where can we get more?
Jeff: Well, like you mentioned at the start, check out the AI in HCM course on MyLearn. It’s about an hour and a half, but it really is time well spent. And we get into detail on everything the three of us discussed here today, and then we have demoscussions of every feature where we show them and how they work and which data they’re using and a whole bunch more. So, there’s that. Plus, I hear the instructor is excellent.
Lois: I can vouch for that!
Jeff: Well, then you should definitely look into Dynamic Skills. Different instructor. But we have another course, and again I think about an hour and a half, but when you’re done with the AI course, I always feel like Dynamic Skills is where you really wanna go next to really flesh out all the Talent Management ideas that got stirred up while you were having a great time in the AI course.
And then finally, the live classes. It’s always really fun to take live questions while we talk about AI in HCM.
29:54
Nikita: Thanks, Jeff! This has been really interesting.
Lois: Yeah, thanks for being here, Jeff. We’ve loved having you on.
Jeff: Thank you guys so much for having me. It’s been a pleasure.
Lois: If you want to learn more about what we discussed, go to the show notes for today’s episode. You’ll find links to the AI for Human Capital Management and Dynamic Skills courses that Jeff mentioned so you can check them out. You can also head over to mylearn.oracle.com to find the live sessions for MyLearn subscribers that Jeff conducts.
Nikita: Join us next week as we kick off our “Best of 2024” season, where we’ll be revisiting some of our most popular episodes of the year. Until then, this is Nikita Abraham…
Lois: And Lois Houston, signing off!
30:35
That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
93 에피소드
Manage episode 450954874 series 3560727
In this special episode of the Oracle University Podcast, Lois Houston and Nikita Abraham, along with Principal HCM Instructor Jeff Schuster, delve into the intersection of HCM and AI, exploring the practical applications and implications of this technology in human resources.
Jeff shares his insights on bias and fairness, the importance of human involvement, and the need for explainability and transparency in AI systems. The discussion also covers the various AI features embedded in HCM and their impact on talent acquisition, performance management, and succession planning.
Oracle AI in Fusion Cloud Human Capital Management: https://mylearn.oracle.com/ou/learning-path/oracle-ai-in-fusion-cloud-human-capital-management-hcm/136722
Oracle Fusion Cloud HCM: Dynamic Skills: https://mylearn.oracle.com/ou/course/oracle-fusion-cloud-hcm-dynamic-skills/116654/
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
Twitter: https://twitter.com/Oracle_Edu
Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode.
--------------------------------------------------------
00:00
Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!
00:26
Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs here at Oracle University, and with me, is Nikita Abraham, Team Lead of Editorial Services.
Nikita: Hi everyone! Last week’s conversation was all about Oracle Database 23ai backup and recovery, where we dove into instance recovery and effective recovery strategies. Today’s episode is a really special one, isn’t it, Lois?
00:53
Lois: It is, indeed, Niki. Of course, all of our AI episodes are special. But today, we have our friend and colleague Jeff Schuster with us. I think our listeners are really going to enjoy what Jeff has to share with us.
Nikita: Yeah definitely! Jeff is a Principal HCM Instructor at Oracle University. He recently put together this really fantastic course on MyLearn, all about the intersection of HCM and AI, and that’s what we want to pick his brain about today. Hi Jeff! We’re so excited to have you here.
01:22
Jeff: Hey Niki! Hi Lois! I feel special already. Thanks you guys so much for having me.
Nikita: You’ve had a couple of busy months, haven’t you?
01:29
Jeff: I have! It’s been a busy couple of months with live classes. I try and do one on AI in HCM at least once a month or so so that we can keep up with the latest/greatest stuff in that area. And I also got to spend a few days at Cloud World teaching a few live classes (about artificial intelligence in HCM, as a matter of fact) and meeting our customers and partners. So yeah, absolutely great week. A good time was had by me.
01:55
Lois: I’m sure. Cloud World is such a great experience. And just to clarify, do you think our customers and partners also had a good time, Jeff? It wasn’t just you, right?
Jeff: Haha! I don’t think it was just me, Lois. But, you know, HCM is always a big deal, and now with all the embedded AI functionality, it really wasn’t hard to find people who wanted to spend a little extra time talking about AI in the context of our HCM apps. So, there are more than 30 separate AI-powered features in HCM. AI features for candidates to find the right jobs; for hiring managers to find the right candidates; skills, talent, performance management, succession planning— all of it is there and it really covers everything across the Attract/Grow/Keep buckets of the things that HR professionals do for a living. So, anyway, yeah, lots to talk about with a lot of people!
There’s the functional part that people want to know about—what are these features and how do they work? But obviously, AI carries with it all this cultural significance these days. There’s so much uncertainty that comes from this pace of development in that area. So in fact, my Cloud World talk always starts with this really silly intro that we put in place just to knock down that anxiety and get to the more practical, functional stuff.
03:11
Nikita: Ok, we’re going to need to discuss the functional stuff, but I feel like we’re getting a raw deal if we don’t also get that silly intro.
Lois: She makes a really good point.
Jeff: Hahaha! Alright, fair enough. Ok, but you guys are gonna have to imagine I’ve got a microphone and a big room and a lot of echo.
AI is everywhere. In your home. In your office. In your homie’s home office.
03:39
Lois: I feel like I just watched the intro of a sci-fi movie.
Jeff: Yeah. I’m not sure it’s one I’d watch, but I think more importantly it’s a good way to get into discussing some of the overarching things we need to know about AI and Oracle’s approach before we dive into the specific features, so you know, those features will make more sense when we get there?
03:59
Nikita: What are these “overarching” things?
Jeff: Well, the things we work on anytime we’re touching AI at Oracle. So, you know, it starts with things like Bias and Fairness. We usually end up in a pretty great conversation about things like how we avoid bias on the front end by making sure we don’t ingest things like bias-generating content, which is to say data that doesn’t necessarily represent bias by itself, but could be misused. And that pretty naturally leads us into a talk about guardrails.
Nikita: Guardrails?
Jeff: Yeah, you can think of those as checkpoints. So, we’ve got rules about ingestion and bias. And if we check the output coming out of the LLM to ensure it complied with the bias and fairness rules, that’s a guardrail. So, we do that. And we do it again on the apps side. And so that’s to say, even though it’s already been checked on the AI side, before we bring the output into the HCM app, it’s checked again. So another guardrail.
04:58
Lois: How effective is that? The guardrails, and not taking in data that’s flagged as bias-generating?
Jeff: Well, I’ll say this: It’s both surprisingly good, and also nowhere near good enough.
Lois: Ok, that’s as clear as mud. You want to elaborate on that?
Jeff: Haha! I think all it means is that approach does a great job, but our second point in the whole “standards” discussion is about the significance of having a human in the loop. Sometimes more than one, but the point here is that, particularly in HCM, where we’re handling some really important and sensitive data, and we’re introducing really powerful technology, the H in HCM gets even more important. So, throughout the HCM AI course, we talk about opportunities to have a human in the loop. And it’s not just for reviewing things. It’s about having the AI make suggestions, and not decisions, for example. And that’s something we always have a human in the loop for all the time. In fact, when I started teaching AI for HCM, I always said that I like to think of it is as a great big brain, without any hands.
06:00
Nikita: So, we’re not talking about replacing humans in HCM with AI.
Jeff: No, but we’re definitely talking about changing what the humans do and why it’s more important than ever what the humans do.
So, think of it this way, we can have our embedded AI generate this amazing content, or create really useful predictions, whatever it is that we need. We can use whatever tools we want to get there, but we can still expect people to ask us, “Where did that come from?” or “Does this account for [whatever]?”. So we still have to be able to answer that. So that’s another thing we talk about as kind of an overarching important concept: Explainability and Transparency.
06:41
Nikita: I’m assuming that’s the part about showing our work, right? Explaining what's being considered, how it's being processed, and what it is that you're getting back.
Jeff: That’s exactly it. So we like to have that discussion up front, even before we get to things like Gen and Non-Gen AI, because it’s great context to have in mind when you start thinking about the technology. Whenever we’re looking at the tech or the features, we’re always thinking about whether people are appropriately involved, and whether people can understand the AI product as well as they need to.
07:11
Lois: You mentioned Gen and Non-Gen AI. I’ve also heard people use the term “Classic AI.” And lately, a lot more about RAG and Agents. When you're teaching the course, does everybody manage to keep all the terminology straight?
Jeff: Yeah, people usually do a great job with this. I think the trick is, you have to know that you need to know it, if that makes sense.
Lois: I think so, but why don’t you spell it out for us.
Jeff: Well, the temptation is sometimes to leave that stuff to the implementers or product developers, who we know need to have a deep understanding of all of that. But I think what we’ve learned is, especially because of all the functional implications, practitioners, product owners, everybody needs to know it too. If for no other reason so they can have more productive conversations with their implementers.
You need to know that Classic or Non-Generative AI leverages machine learning, and that that’s all you need in order to do some incredibly powerful things like predictions and matching.
So in HCM, we’re talking about things like predicting time to hire, identifying suggested candidates for job openings, finding candidates similar to ones you already like, suggesting career paths for employees, and finding recommended successors. All really powerful matching stuff. And all of that stuff uses machine learning and it’s certainly AI, but none of that uses Generative AI to do that because it doesn’t need to.
08:38
Nikita: So how does that fit in with all the hype we’ve been hearing for a long time now about Gen AI and how it’s such a transformative technology that’s going to be more impactful than anything else?
Jeff: Yeah, and that can be true too. And this is what we really lean into when we do the AI in HCM course live. It’s much more of a “right AI for the right job” kind of proposition.
Lois: So, just like you wouldn’t use a shovel to mix a cake. Use the right tool for the job. I think I’ve got it. So, the Classic AI is what’s driving those kinds of features in HCM? The matching and recommendations?
Jeff: Exactly right. And where we need generative content, that’s where we add on the large language model capability. With LLMs, we get the ability to do natural language processing. So it makes sense that that’s the technology we’d use for tasks like “write me a job description” or “write me performance development tips for my employee”.
09:33
Nikita: Ok, so how does that fit in with what Lois was asking about RAG and Agents? Is that something people care about, or need to?
Jeff: I think it’s easiest to think about those as the “what’s next” pieces, at least as it relates to the embedded AI. They kind of deal with the inherent limitations of Gen and Non-Gen components. So, RAG, for example - I know you guys know, but your listeners might not...so what’s RAG stand for?
Lois & Nikita: Retrieval. Augmented. Generation.
Jeff: Hahaha! Exactly. Obviously. But I think everything an HCM person needs to know about that is in the name. So for me, it’s easiest to read that one backwards. Retrieval Augmented Generation. Well, the Generation just means it’s more generative AI. Augmented means it’s supplementing the existing AI. And Retrieval just tells you that that’s how it’s doing it. It’s going out and fetching something it didn’t already have in order to complete the operation.
10:31
Lois: And this helps with those limitations you mentioned?
Nikita: Yeah, and what are they anyway?
Jeff: I think an example most people are familiar with is that large language models are trained on this huge set of information. To a certain point. So that model is trained right up to the point where it stopped getting trained. So if you’re talking about interacting with ChatGPT, as an example, it’ll blow your doors off right up until you get to about October of 2023 and then, it just hasn’t been trained on things after that. So, if you wanted to have a conversation about something that happened after that, it would need to go out and retrieve the information that it needed.
For us in HCM, what that means is taking the large language model that you get with Oracle, and using retrieval to augment the AI generation for the things that the large language model wouldn’t have had.
11:22
Nikita: So, things that happened after the model was trained? Company-specific data? What kind of augmenting are you talking about?
Jeff: It’s all of that. All those things happen and it’s anything that might be useful, but it’s outside the LLM’s existing scope. So, let’s do an example. Let’s say you and Lois are in the market to hire someone. You’re looking for a Junior Podcast Assistant. We’d like the AI in HCM to help, and in order to do that, it would be great if it could not just generate a generic job description for the posting, but it could really make it specific to Oracle. Even better, to Oracle University.
So, you’d need the AI to know a few more things in order to make that happen. If it knows the job level, and the department, and the organization—already the job posting description gets a lot better. So what other things do you think it might need to know?
12:13
Lois: Umm I’m thinking…does it need to account for our previous hiring decisions? Can it inform that at all?
Jeff: Yes! That’s actually a key one. If the AI is aware not only of all the vacancies and all of the transactional stuff that goes along with it (like you know who posted it, what’s its metadata, what business group it was in, and all that stuff)...but it also knows who we hired, that’s huge.
So if we put all that together, we can start doing the really cool stuff—like suggesting candidates based not only on their apparent match on skills and qualifications, but also based on folks that we’ve hired for similar positions. We know how long it took to make those hires from requisition open to the employee’s first start date. So we can also do things like predicting time to hire for each vacancy we have with a lot more accuracy. So now all of a sudden, we’re not just doing recruiting, but we have a system that accounts for “how we do it around here,” if that makes any sense.
But the point is, it’s the augmented data, it’s that kind of training that we do throughout ingestion, going out to other sources for newer or better information, whatever it is we need. The ability to include it alongside everything that’s already in the LLM, that’s a huge deal.
13:31
Nikita: Ok, so I think the only one we didn’t get to was Agents.
Jeff: Yeah, so this one is maybe a little less relevant in HCM—for now anyway. But it’s something to keep an eye on. Because remember earlier when I described our AI as having a great big brain but no hands?
Lois: Yeah...
Jeff: Well, agents are a way of giving it hands. At least for a very well-defined, limited set of purposes. So routine and repetitive tasks. And for obvious reasons, in the HCM space, that causes some concerns. You don’t want, for example, your AI moving people forward in the recruiting process or changing their status to “not considered” all by itself. So going forward, this is going to be a balancing act. When we ask the same thing of the AI over and over again, there comes a point where it makes sense to kind of “save” that ask. When, for example, we get the “compare a candidate profile to a job vacancy” results and we got it working just right, we can create an agent. And just that one AI call that specializes in getting that analysis right. It does the analysis, it hands it back to the LLM, and when the human has had what they need to make sure they get what they need to make a decision out of it, you’ve got automation on one hand and human hands on the other...hand.
14:56
Have you mastered the basics of AI? Are you ready to take your skills to the next level? Unlock the potential of advanced AI with our OCI Generative AI Professional course and certification that covers topics like large language models, the OCI Generative AI Service, and building Q&A chatbots for real-world applications. Head over to mylearn.oracle.com to find out more.
15:26
Nikita: Welcome back! Jeff, you’ve mentioned the “Time to Hire” feature a few times? Is that a favorite with people who take your classes?
Jeff: The recruiting folks definitely seem to enjoy it, but I think it’s just a great example for a couple of reasons. First, it’s really powerful non-generative AI. So it helps emphasize the point around the right AI for the right job. And if we’re talking about things in chronological order, it’s something that shows up really early in the hire-to-retire cycle.
And, you know, just between us learning nerds, I like to use Time to Hire as an early example because it gets folks in the habit of working through some use cases. You don’t really know if a feature is going to get you what you need until you’ve done some of that.
So, for example, if I tell you that Time to Hire produces an estimated number of days to your first hire. And you’re still Lois, and you’re still Niki, and you’re hiring for a Junior Podcast Assistant. So why do you care about time to hire? And I’m asking you for real—What would you do with that prediction if you had it?
16:29
Nikita: I guess I’d know how long it is before I can expect help to arrive, and I could plan my work accordingly.
Jeff: Absolutely. What else. What could you do with a prediction for Time to Hire?
Lois: Think about coverage?
Jeff: Yeah! Exactly the word I was looking for. Say more about that.
Lois: Well, if I know it’s gonna be three months before our new assistant starts, I might be able to plan for some temporary coverage for that work. But if I had a prediction that said it’s only going to be two weeks before a new hire could start, it probably wouldn’t be worth arranging temporary coverage.
Niki can hold things down for a couple of weeks.
Jeff: See, I’m positive she could! That’s absolutely perfect! And I think that’s all you really need to have in terms of prerequisites to understand any of the AI features in HCM. When you know what you might want to do with it, like predicting the need for temp cover, and you’ve got everything we talked about in the foundation part of the course—the Gen and the Classic, all that stuff, you can look at a feature like Time to Hire and then you can probably pick that up in 30 seconds.
17:29
Nikita: Can we try it?
Jeff: Sure! I mean, you know, we’re not looking at screens for this conversation, but we can absolutely try it. You’re a recruiter. If I tell you that Time to Hire is a feature that you run into on the job requisition and it shows you just a few editable fields, and then of course, the prediction of the number of days to hire—tell me how you think that feature is going to work when you get there.
Lois: So, what are the fields? And does it matter?
Jeff: Probably not really, but of course you can ask. So, let me tell you. Ready? The fields—they are these. Requisition Title, Location, and Education Level.
Nikita: Ok, well, I have to assume that as I change those things… like from a Junior Podcast Assistant to a Senior Podcast Assistant, or change the location from Redwood Shores to Detroit, or change the required education, the time to hire is going to change, right?
Jeff: 100%, exactly. And it does it in real time as you make those changes to those values. So when you pick a new location, you immediately get a new number of days, so it really is a useful tool.
But how does it work? Well, we know it’s using a few fields from the job requisition, but that’s not enough. Besides those fields, what else would you need in order to make this prediction work?
18:43
Lois: The part where it translates to a number of days. So, this is based on our historic hiring data? How long it took us to hire a podcast assistant the last time?
Jeff: Yep! And now you have everything you need. We call that “historic data from our company” bit “ingestion,” by the way. And there’s always a really interesting discussion around that when it comes up in the course. But it’s the process we use to bring in the HCM data to the AI so it can be considered or predictions exactly like this.
Lois: So it’s the HCM data making the AI smarter and more powerful.
Nikita: And tailored.
Jeff: Exactly, it’s all of that. And obviously, the HCM is better because we’ve given it the AI. But the AI is also better because it has the HCM in it.
But look, I was able to give you a quick description of Time to Hire, and you were able to tell me what it does, which data it uses, and how it works in just a few seconds.
So, that’s kind of the goal when we teach this stuff. It’s getting everybody ready to be productive from moment #1 because what is it and how does it work stuff is already out of the way, you know?
19:52
Lois: I do know!
Nikita: Can we try it with another one?
Jeff: Sure! How about we do...Suggested Candidates.
Lois: And you’re going to tell us what we get on the screen, and we have to tell you how it works, right?
Jeff: Yeah, yeah, exactly. Ok—Suggested Candidates. You’re a recruiter or a hiring manager. You guys are still looking for your Junior Podcast Assistant. On the requisition, you’ve got a section called Suggested Candidates. And you see the candidate’s name and some scores.
Those scores are for profile match, skills match, experience match. And there’s also an overall match score, and the highest rated people you notice are sorted to the top of the list. So, you with me so far?
Lois: Yes!
Jeff: So you already know that it’s suggesting candidates. But if you care about explainability and transparency like we talked about at the start, then you also care about where these suggested candidates came from. So let’s see if we can make progress against that. Let’s think about those match scores. What would you need in order to come up with match scores like that?
20:54
Nikita: Tell me if I’m oversimplifying this, but everything about the job on the requisition, and everything about the candidate? Their skills and experience?
Jeff: Yeah, that’s actually simplified pretty perfectly. So in HCM, the candidate profile has their skills and experience, and the req profile has the req requirements.
Lois: So we’re comparing the elements of the job profile and the person/candidate profile. And they’re weighted, I assume?
Jeff: That’s exactly how it works. See, 30 seconds and you guys are nailing these! In fairness, when we discuss these things in the course, we go into more detail. And I think it’s helpful for HCM practitioners to know which data from the person and the job profiles is being considered (and sometimes just as important, which is not being considered). And don’t forget we’re also considering our ingested data. Our previously selected candidates.
21:45
Lois: Jeff, can I change the weighting? If I care more about skills than experience or education, can I adjust the weighting and have it re-sort the candidates?
Jeff: Super important question. So let me give you the answer first, which is “no.” But because it’s important, I want to tell you more. This is a discussion we have in the class around Oracle’s Embedded vs. Custom AI. And they’re both really important offerings. With Embedded, what we’re talking about are the features that come in HCM like any other feature.
They might have some enablement steps like profile options, and there’s an activation panel. But essentially, that’s it. There’s no inspection panel for you to open up and start sticking your screwdriver in there and making changes. Believe it or not, that’s a big advantage with Embedded AI, if you ask me anyway.
Nikita: It’s an advantage to not be able to configure it?
Jeff: In this context, I think you can say that it is. You know, we talk about the advantages about the baked-in, Embedded AI in this course, but one of the key things is that it’s pre-built and pre-tested. And the big one: that it’s ready to use on day one. But one little change in a prompt can have a pretty big butterfly effect across all of your results. So, Oracle provides the Embedded AI because we know it works because we’ve already tested it, and it’s, therefore, ready on day one.
And I think that story maybe changes a little bit when you open up the inspection panel and bust out that screwdriver. Now you’re signing up to be a test pilot. And that’s just fundamentally different than “pre-built and ready on day one.” Not that it’s bad to want configuration.
23:24
Lois: That’s what the Custom AI path and OCI are about though, right? For when customers have hyper-specific needs outside of Oracle’s business processes within the apps, or for when that kind of tuning is really required. And your AI for HCM course—that focuses on the Embedded AI instead of Custom, yes?
Jeff: That is exactly it, yes.
Nikita: You said there are about 30 of these AI features across HCM. So, when you teach the course, do you go through all of them or are there favorites? Ones that people want to spend more time on so you focus on those?
Jeff: The professional part of me wants to tell you that we do try to cover all of them, because of that explainability and transparency business we talked about at the beginning. That’s for real, so I want our customers to have that for the whole scope.
24:12
Nikita: The professional part? What’s the other part?
Jeff: I guess that’s the part that says sure, we need to hit all of them. But some of them are just inherently more fun to work on. So, it’s usually the learners who drive that in the live classes; when they get into something, that’s where we spend the most time. So, I have my favorites too. The learners have their favorites. And we spend time where it’s everybody’s favorite.
Lois: Like where?
Jeff: Ok, so one is far from the most complex one, but I think it’s really elegant in its simplicity. And it’s the Celebrate feature, where we do employee recognition. There’s an AI Assist available there. So when it’s time to recognize a colleague, you just need to enter the headline or the title, and the AI takes it from there and just writes up the recognition.
24:56
Lois: What about that makes it a good example, Jeff? You said it’s elegant. What do you mean?
Jeff: I think it’s a few things. So, start with the prompt. It’s just the one line—just the headline. And that’s your one input. So, type in the headline, get the recognition below. It’s a great demonstration of not just the simplicity, but the power we get out of that simplicity. I always ask it to recognize my employees for implementing AI features in Oracle HCM, just to see what it comes up with.
When it tells the employee that they’re helping the company by automating routine tasks, bringing efficiency to the HR department, and then launches into specific examples of how AI features help in HCM, it really is pretty incredible. So, it’s a simple demo, but it explains a lot about how the Gen AI works.
Lois: That’s really cool.
25:45
Nikita: So this one is generative AI. It’s using the large language model to create the recognition based on the prompt, which is basically just whatever you entered in the headline. But how does that help explain how Gen AI works in HCM?
Jeff: Well, let’s take our simple prompt for example. There’s a lot happening behind the scenes. It’s taking our prompt, it’s doing its LLM thing, but before it’s done, it’s creating the results in a very specific way. An employee recognition reads really differently than a job description. So, I usually describe this as the hidden part of our prompt. The visible part is what we typed. But it needs to know things like our desired output format: make sure to use the person’s name, summarize the benefits, and be sure to thank them for their contribution, that kind of stuff. So, those things are essentially hard-coded into the page. That is to say, this is another area where we don’t get an inspection panel that lets us go in and tweak the prompt.
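As a rough sketch of that hidden-prompt idea, here’s how a page like Celebrate might wrap the one visible input in fixed instructions before calling the LLM. The template wording and function names are hypothetical; the real instructions are baked into the page and not exposed.

```python
# Sketch of the "hidden prompt": the page wraps the user's one visible
# input (the headline) in fixed instructions before calling the LLM.
# The template text is an illustrative assumption, not Oracle's prompt.

HIDDEN_TEMPLATE = """You are writing an employee recognition message.
Use the recipient's name, keep the tone warm and professional,
summarize the benefits of their contribution with specific examples,
and close by thanking them for their contribution.

Recipient: {recipient}
Headline: {headline}
"""

def build_recognition_prompt(recipient: str, headline: str) -> str:
    # The headline is the only part the user types; everything else
    # is effectively hard-coded into the page.
    return HIDDEN_TEMPLATE.format(recipient=recipient, headline=headline)

prompt = build_recognition_prompt(
    recipient="Jeff",
    headline="Implementing AI features in Oracle HCM",
)
print(prompt)  # This assembled prompt is what would go to the LLM.
```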
26:42
Nikita: And that’s generally how generative AI works?
Jeff: Pretty much. Wherever you see an AI Assist button in HCM, that’s more or less what’s going on. And so when you get to some of the other more complex features, it’s helpful to know that that is what’s going on.
Lois: Like where?
Jeff: Well, it works that way for the About Me part of your employee profile and for goal creation in performance. And I think a really great example, also in performance, is where managers are providing the competency development tips.
So the prompt is a little more complex there because it involves the employee’s proficiency rating instead of free text. But still, pretty straightforward. You’re gonna click AI Assist and it’s gonna generate all the development tips for any specific competency listed for that employee. Good development tips. Five of them. Nicely formatted with bullet points. And these aren’t random words assembled by an AI; they conform to best practices in the development of competencies. So, something is telling the LLM to give us results that are that good, in that particular way.
So, it’s just another good example of the work AI is doing while protected behind the inspection panel that doesn’t exist. So, the coding of that page, in combination with what the LLM generates and the agent that it uses, is what produces the result. That’s generally the approach. In the class, we always have a good time digging into what must be going on behind that inspection panel. Generally speaking, the better feel we have for what’s going on on these pages, the better we’re able to get the results we want, even without having that screwdriver out.
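Following that same pattern, here’s a hypothetical sketch of how the development-tips page might combine a structured input, the proficiency rating, with fixed formatting instructions. Only the inputs and the shape of the output come from the feature as described; the template wording, the rating scale, and the call_llm stand-in are invented for illustration.

```python
# Sketch of a structured-input prompt: instead of free text, the page
# feeds the LLM a competency name and a proficiency rating, plus fixed
# instructions about the output. The template wording is assumed.

TIPS_TEMPLATE = """You are an expert coach writing competency development tips.
Competency: {competency}
Employee's current proficiency: {rating} out of 5.

Write exactly five development tips appropriate for that proficiency
level, formatted as bullet points, following best practices for
developing this competency.
"""

def build_tips_prompt(competency: str, rating: int) -> str:
    return TIPS_TEMPLATE.format(competency=competency, rating=rating)

def call_llm(prompt: str) -> str:
    # Stand-in for whatever service the page actually invokes behind
    # the inspection panel that doesn't exist.
    raise NotImplementedError("hypothetical LLM call")

print(build_tips_prompt("Communication", rating=3))
```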
28:21
Nikita: So it’s time well-spent, looking at all the individual features?
Jeff: I think so, especially if you’re anticipating really using any of them. So, the good news is, once you learn a few of them and how they work, and what they’re best at, you stop being surprised after a while. But there are always tips and tricks. And like we talked about at the top, explainability and transparency are absolutely key. So, as much as I’m not a fan of the phrase, I do think this is kind of a “knowledge is power” kind of situation.
28:51
Nikita: Sadly, we’re just about out of time for this episode.
Lois: That’s too bad, I was really enjoying this. Jeff, you were just talking about knowledge—where can we get more?
Jeff: Well, like you mentioned at the start, check out the AI in HCM course on MyLearn. It’s about an hour and a half, but it really is time well spent. And we get into detail on everything the three of us discussed here today, and then we have demoscussions of every feature where we show them and how they work and which data they’re using and a whole bunch more. So, there’s that. Plus, I hear the instructor is excellent.
Lois: I can vouch for that!
Jeff: Well, then you should definitely look into Dynamic Skills. Different instructor, but we have another course, and again, I think it’s about an hour and a half. When you’re done with the AI course, I always feel like Dynamic Skills is where you really wanna go next to flesh out all the Talent Management ideas that got stirred up while you were having a great time in the AI course.
And then finally, the live classes. It’s always really fun to take live questions while we talk about AI in HCM.
29:54
Nikita: Thanks, Jeff! This has been really interesting.
Lois: Yeah, thanks for being here, Jeff. We’ve loved having you on.
Jeff: Thank you guys so much for having me. It’s been a pleasure.
Lois: If you want to learn more about what we discussed, go to the show notes for today’s episode. You’ll find links to the AI for Human Capital Management and Dynamic Skills courses that Jeff mentioned so you can check them out. You can also head over to mylearn.oracle.com to find the live sessions for MyLearn subscribers that Jeff conducts.
Nikita: Join us next week as we kick off our “Best of 2024” season, where we’ll be revisiting some of our most popular episodes of the year. Until then, this is Nikita Abraham…
Lois: And Lois Houston, signing off!
30:35
That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.