This content is provided by Jody Maberry. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Jody Maberry or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://ko.player.fm/legal.
An AI Agenda - Robots, Rules, and Really Big Questions

20:57
 

“We have to make sure AI doesn’t just automate what we've always done. It should elevate what’s possible.”

Notable Moments

00:40 – What’s pushing us to talk about AI now?

04:22 – A call for AI mission statements

08:18 – When tools lead before people: the risk of reactive adoption

11:05 – Defining AI boundaries: what it should never replace

15:33 – ChatGPT, Canva, Magic School: the tools already in use

18:42 – The importance of transparency and human oversight

22:55 – Reframing AI as “instructional support,” not just automation

AI isn’t something on the horizon. It’s already woven into our daily workflows, often in ways we barely notice. As Redox team members, we’re right in the thick of it, navigating both the promise and the risks that come with this powerful technology.

Our aim is to make AI practical, secure, and empowering across our organization. With insights from our security engineering team and guest Brent Ufkes, we focused on key strategies that work for us. When new AI tools crop up, curiosity comes first, but we never skip the important questions: Who’s using it? What kind of data is involved? How does it fit into our existing risk frameworks?

Our approach is audience-centered. We evaluate AI exactly as we would any other tool, layering data classification and security reviews to make sure nothing sensitive, especially PHI, gets mishandled. Education sits at the core: regular updates in Slack, comprehensive living documents, and clear policies all keep things transparent and flexible. Brent reminds us that all policies work together: AI doesn't trump privacy or compliance, and training never ends.

We’re building a “culture of learning,” leaning on established security tools like DLP solutions and endpoint monitoring to keep things safe behind the scenes. AI tools are only as good as the context we provide and the prompts we write, and we’re always improving together.

The biggest takeaway? AI can give us a real edge if we put security, clarity, and cooperation first. At Redox, we don’t just adapt to change; we shape it, one secure workflow at a time.

Resources

Have feedback or a topic suggestion? Submit it using this linked form.

www.redoxengine.com

Past Podcast Episodes

https://redoxengine.com/solutions/platform-security

Matt Mock [email protected]

Meghan McLeod [email protected]

