It’s the very first episode of The Big Pitch with Jimmy Carr and our first guest is Phil Wang! And Phil’s subgenre is… This Place is Evil. We’re talking psychological torture, we’re talking gory death scenes, we’re talking Lorraine Kelly?! The Big Pitch with Jimmy Carr is a brand-new comedy podcast where each week a different celebrity guest pitches an idea for a film based on one of the SUPER niche subgenres on Netflix. From ‘Steamy Crime Movies from the 1970s’ to ‘Australian Dysfunctional Family Comedies Starring a Strong Female Lead’, our celebrity guests will pitch their wacky plot, their dream cast, the marketing stunts, and everything in between. By the end of every episode, Jimmy Carr, comedian by night / “Netflix executive” by day, will decide whether the pitch is greenlit or condemned to development hell! Listen on all podcast platforms and watch on the Netflix Is A Joke YouTube channel. The Big Pitch is a co-production by Netflix and BBC Studios Audio. Jimmy Carr is an award-winning stand-up comedian and writer, touring his brand-new show JIMMY CARR: LAUGHS FUNNY throughout the USA from May to November this year, as well as across the UK and Europe, before hitting Australia and New Zealand in early 2026. All info and tickets for the tour are available at JIMMYCARR.COM
Production Coordinator: Becky Carewe-Jeffries
Production Manager: Mabel Finnegan-Wright
Editor: Stuart Reid
Producer: Pete Strauss
Executive Producer: Richard Morris
Executive Producers for Netflix: Kathryn Huyghue, Erica Brady, and David Markowitz
Set Design: Helen Coyston
Studios: Tower Bridge Studios
Make Up: Samantha Coughlan
Cameras: Daniel Spencer
Sound: Charlie Emery
Branding: Tim Lane
Photography: James Hole
Privacy in Practice, brought to you by VeraSafe, is the podcast for actionable insights and real-world strategies for privacy and compliance teams. Hosted by privacy pros Kellie du Preez and Danie Strachan, each episode unpacks the practical side of compliance and data management, bringing together industry leaders and thought-provoking discussions. Whether you’re leading privacy efforts at your company or just beginning to explore this field, tune in for meaningful conversations that provide a straightforward approach to data privacy and empower listeners to make informed, confident decisions. Privacy isn’t just about regulatory boxes—it’s about fostering trust and resilience in a digital world.
The future of AI is now, but how can you ensure it’s used responsibly while also driving business growth? In this episode of Privacy in Practice, Shane Witnov, AI Policy Director at Meta, provides a behind-the-scenes look at how the company navigates the complex intersection of AI innovation and privacy. Shane reveals how Meta uses its proven privacy frameworks to govern AI at scale and stay ahead of emerging regulations, offering a blueprint that businesses of all sizes can follow. This episode shows that AI governance doesn’t have to be an obstacle; instead, it can be your next strategic advantage.
What You'll Learn:
- Why AI governance doesn’t have to mean starting from scratch.
- Why AI governance can (and should) build on your existing privacy, security, and data use frameworks.
- How to use proven privacy frameworks to govern AI safely.
- Why open-source AI models can offer a better privacy solution.
- How to set clear, actionable guidelines for safe AI use without banning existing tools.
- Why staying ahead of state-level AI bills is crucial for protecting your business.
- How to identify AI risks early with red-teaming and practical testing.
- Why transparency isn’t just about labels.
- How to build trust through real-world impact.
- And so much more!
Shane Witnov serves as AI Policy Director at Meta, where he focuses on the intersection of technology, privacy, and public policy, particularly in artificial intelligence. With a background in digital civil liberties and privacy law, he has been instrumental in guiding Meta's approach to AI governance and ethical implementation since joining the company in 2015. His expertise spans privacy compliance, AI ethics, and technological innovation, having previously worked with organizations like the Electronic Frontier Foundation and gaining valuable insights through his experience in law and technology.
Connect with Shane Witnov here: LinkedIn
Connect with Kellie du Preez here: LinkedIn
Connect with Danie Strachan here: LinkedIn
Follow VeraSafe here: LinkedIn
If you enjoyed this episode, make sure to subscribe, rate, and review it.
Episode Highlights:
[00:06:43] Convergence Over Compliance: Building AI Governance That Scales
Effective AI governance is about more than simply meeting regulatory requirements. Shane explains how Meta's "convergence-based" approach helps create scalable, user-focused privacy solutions. By prioritizing features based on the value they offer to users globally, rather than tailoring to niche or less-used legal requirements, businesses can build systems that serve both compliance needs and real user benefits. Shane highlights the internal question at Meta: “Are we building a toggle for 5 users or 20% of users?” This distinction is critical in determining whether a control should be globally prioritized or tailored for specific jurisdictions. The takeaway for privacy professionals is clear: don’t waste resources on solutions no one uses; instead, build solutions that provide value now and set your business up for future regulatory developments.
[00:15:45] Why AI Isn’t an Exception: Use the Frameworks You Already Have
Shane cautions against AI exceptionalism, the idea that AI requires entirely new governance structures. Instead, start with existing privacy and risk frameworks, and then layer in AI-specific considerations like robustness, reliability, and appropriate use. He stresses that Meta used its well-established privacy risk processes as the foundation for AI model evaluations and red teaming. This approach, which builds on years of work, offers privacy and compliance teams a practical and cost-effective way to start governing AI while evolving as new risks emerge. The message is clear: don't start from scratch; evolve your existing frameworks to meet the needs of emerging technologies.
[00:26:54] Bans Don’t Work, Clear Guidance Does
Many businesses fear AI's potential risks and react by banning tools like ChatGPT outright. Shane warns that this is a mistake. "If you don’t give guidance, your employees are probably using it anyway," he points out. Rather than banning tools outright, organizations should focus on providing clear, actionable guidelines for acceptable uses. For example, encouraging employees to use AI for internal tasks like summarizing meeting notes or drafting emails is acceptable, while uploading customer data to these platforms is not. This approach empowers employees to use AI safely and responsibly, without stifling productivity or innovation. Whether you’re a privacy officer or a business leader, this segment provides a roadmap for creating clear boundaries and ensuring safe AI use.
[00:34:43] How to Start AI Governance with No Budget and No Team
No team? No budget? No problem! Shane offers a simple, three-step process for small businesses or startups to start implementing AI governance:
1. Assign Someone to Oversee AI Use: This doesn’t need to be a full-time role, just someone who can monitor AI developments and risks.
2. Run Low-Risk Pilot Programs: Start with non-critical workflows that can benefit from AI, and gradually scale up as you gather insights.
3. Test with a Red-Team Mindset: Identify vulnerabilities and risks early on by testing AI tools before fully implementing them.
By following these steps, businesses can take meaningful action without needing large teams or massive budgets. Shane emphasizes that AI governance is about being iterative and thoughtful rather than perfect, which is especially important for smaller organizations working with limited resources.
[00:38:34] Transparency Isn’t Just Labels: It’s Context That Matters
Shane explains how transparency around AI usage is evolving. While labeling AI-generated content is one approach, it often doesn’t align with user concerns.
For example, Meta’s attempt to label AI-edited images using metadata standards (like those from Photoshop) led to confusion and frustration among users, who didn’t care about the technical aspects of AI use; they just didn’t want to be misled. This highlights an important lesson for privacy leaders: transparency isn’t about disclosing every instance of AI use; it’s about providing meaningful context that aligns with users' expectations. By focusing on user impact rather than technical disclosures, organizations can build trust and ensure that transparency efforts are both meaningful and effective.
[00:21:00] From Focus Groups to Global Consensus: Listening as a Governance Tool
How do you know if your AI tools align with user values? Ask them. Shane explains how Meta uses a variety of methods, including global focus groups, UX research, and deliberative democracy forums, to gather input from real users about how AI should be governed. These forums, which bring together ordinary users after structured education on ethical dilemmas, often reveal surprising alignment. For example, when presented with challenging questions, 70% of participants reached consensus on issues that initially seemed divisive. The key takeaway for privacy professionals is clear: building real-world input into your governance framework can help ensure that AI tools align with the needs and values of the people who use them.
Episode Resources:
Shane Witnov on LinkedIn
VeraSafe Website
Kellie du Preez on LinkedIn
Danie Strachan on LinkedIn
Connect with us at podcast@verasafe.com
This podcast is brought to you by VeraSafe.
Privacy and marketing can (and should) work together—and the most forward-thinking businesses are proving it. In this episode of Privacy in Practice, hosts Kellie du Preez and Danie Strachan welcome Dr. Sachiko Scheuing, European Privacy and AI Governance Officer at Acxiom. As co-chair of the Federation of European Data and Marketing, Sachiko shares her front-line perspective on how responsible data use, smart marketing, and privacy compliance can fuel both trust and growth.
What You'll Learn:
- How digital advertising empowers small and medium-sized businesses
- The three essential categories of AdTech: SEO, Walled Gardens, and Open Internet
- Why PETs and data minimization are key to responsible data use
- How to build a privacy-first culture that drives business success
- Why DPOs are perfectly positioned to lead AI governance
- Practical strategies for data minimization using pseudonymization and anonymization
- Sachiko’s "Inform, Involve, Initiate" framework to improve privacy practices
- And so much more!
Dr. Sachiko Scheuing serves as the European Privacy and AI Governance Officer at Acxiom, where she leads privacy and compliance initiatives across Europe. With over 20 years of experience in marketing, technology, and compliance, she has established herself as a leader in privacy, AI governance, and data protection. As co-chair of the Federation of European Data and Marketing, she is pivotal in shaping data protection and marketing policies across Europe. She is also the author of "How to Use Customer Data: Navigating GDPR, DPDI and a Future with Marketing AI," a comprehensive guide on marketing compliance under the GDPR. Her expertise in balancing privacy requirements with innovative marketing strategies, combined with her advocacy for responsible data practices and AI governance, makes her a respected voice in the privacy and data protection community.
Episode Highlights:
[00:09:08] Why Digital Advertising Isn’t Evil
Digital advertising gets a bad rap, but Sachiko explains why it’s time to reframe the conversation. She challenges the common perception of digital advertising as inherently problematic by highlighting its role in democratizing marketing opportunities. Before the rise of digital ads, only big players like Coca-Cola or IKEA could afford mass-market campaigns. Today, targeted advertising makes it possible for local businesses to reach relevant audiences affordably and effectively. This democratization empowers small and medium-sized enterprises—the backbone of most economies—while helping fund the mostly free internet we all rely on today. The takeaway: privacy professionals can help organizations strike a balance—leveraging ethical digital advertising to support both business growth and equitable access to information, all while maintaining compliance and trust.
[00:15:34] Privacy Culture Starts at the Top
A privacy policy alone won’t build trust—genuine privacy compliance requires a cultural shift, and that shift starts at the top. Sachiko shares how executive commitment to privacy has been essential at Acxiom, where leadership views trust as a business advantage. When executives visibly prioritize privacy, they send a clear message that trust, transparency, and data ethics are fundamental to the organization’s values. But it has to go beyond lip service. Leaders must take concrete actions—allocating resources to privacy programs, funding staff training, and embedding privacy into strategic decision-making. Even when the immediate return isn’t obvious, executives who champion privacy are investing in the company’s future viability and credibility. They empower privacy teams to drive meaningful change across the business, creating an environment where privacy becomes second nature. The result?
Privacy shifts from a regulatory obligation to a strategic differentiator—fueling customer loyalty, enhancing brand reputation, and ensuring sustainable success. And in an era of tightening regulations and rising consumer expectations, organizations that don’t make privacy leadership a priority risk falling behind.
[00:33:37] Using PETs, Anonymization, and RoPA to Strengthen Privacy
In a thoughtful exchange with Kellie and Danie, Sachiko discusses how privacy-enhancing technologies (PETs)—and techniques like pseudonymization, anonymization, differential privacy, and synthetic data—are gaining traction as practical tools for reducing risk without compromising data utility. These approaches are especially valuable for organizations working with large data sets or building AI systems, offering ways to protect individuals while still enabling insight and innovation. The conversation then turns to Records of Processing Activities (RoPA) as a foundational step for understanding your data landscape. A key takeaway is that there’s no single correct structure—what works depends on the business. Whether organized by purpose, system, or data subject type, a well-thought-out RoPA helps teams identify where PETs can be most effective and where compliance risks may be hiding. For those new to these concepts, the group shares practical suggestions for building literacy—from free online trainings to industry conferences and Sachiko’s own book, How to Use Customer Data.
Episode Resources:
Sachiko Scheuing on LinkedIn
How to Use Customer Data: Navigating GDPR, DPDI and a Future with Marketing AI
VeraSafe Website
Kellie du Preez on LinkedIn
Danie Strachan on LinkedIn
Connect with us at podcast@verasafe.com
This podcast is brought to you by VeraSafe.
How long can we toe the line on our thoughts being accessed by new technology? In the latest episode of Privacy in Practice, hosts Kellie du Preez and Danie Strachan welcome Kristen Mathews, Partner at Cooley's Cyber Data Privacy Practice Group, to explore the evolving landscape of mental privacy—its challenges, opportunities, and the critical questions shaping its future. Together, they:
- Examine how businesses collect and use personal information beyond their core services
- Explore how emerging technologies can collect and interpret brain activity
- Investigate the unique challenges of protecting neural data and whether traditional privacy laws are enough
- Reflect on the role of AI in processing neural data and the intersection with emerging regulations like the EU AI Act
- Consider the potential for industry self-regulation in the neurotech space
- Highlight the positive applications of neurotech, from medical uses like seizure prediction to mental wellness applications
- Emphasize the importance of privacy-conscious implementation
Kristen Mathews is a partner in Cooley's Cyber Data Privacy Practice Group and a pioneering voice in the emerging field of mental privacy and neurotech regulation. With a career spanning multiple decades in privacy law, she has been at the forefront of numerous privacy developments, including early data breach responses, online behavioral advertising implementation, and biometric data protection. Her current focus on mental privacy and neurotech regulation demonstrates her commitment to addressing tomorrow's privacy challenges today. Her practical approach to balancing innovation with privacy protection makes her a valuable voice for privacy professionals navigating emerging technologies and their associated regulatory challenges.
Episode Highlights:
[00:03:18] The Data Exchange Framework for Privacy Communication
Privacy professionals should reframe data collection as a "data exchange" between businesses and consumers, where both parties receive clear value. This framework helps organizations clearly communicate what data they need to provide their service versus additional data they collect for other purposes. Companies should explicitly demonstrate the benefits users receive in exchange for their data, making the value proposition transparent. The approach requires privacy teams to work closely with product and marketing teams to articulate the exchange in user-friendly terms. This helps build trust and reduces the risk of users feeling "cheated" when they later discover unexpected data uses.
[00:07:14] Effective Privacy Notice Design: Beyond the Legal Document
Kristen emphasizes that privacy notices should be integrated into the user interface at the exact moment users need the information, not just buried in legal documents. Privacy professionals should ensure notices match the voice and tone of the service, using the same language style that resonates with users. The information should be presented concisely and prominently, avoiding overwhelming users with legal jargon. This approach helps build trust and transparency while reducing the likelihood of litigation and complaints. For maximum effectiveness, privacy teams should coordinate with UI/UX designers to create notices that appear at key decision points in the user journey.
[00:28:29] Protecting Neural Data: A Layered Security Approach
For organizations working with neural data, Kristen recommends implementing multiple layers of protection beyond standard privacy measures. Privacy teams should consider storing neural data locally on devices rather than in the cloud, implementing strong encryption that only allows individual device access, and carefully evaluating the effectiveness of de-identification methods.
Organizations need to think about future-proofing their privacy protections, anticipating how advancing technology might affect data security. This approach helps protect sensitive neural data from breaches, unauthorized access, and potential subpoenas while maintaining functionality for legitimate uses.
[00:31:10] Proactive Self-Regulation in Emerging Technologies
Privacy professionals working with emerging technologies should consider implementing self-regulation before legislation mandates specific requirements. Drawing from the successful example of the ad tech industry, companies should develop privacy protection frameworks that align with their business models while protecting individual rights. Early self-regulation can help shape future legislation in practical ways that work for both businesses and consumers. This approach requires privacy teams to collaborate across industries to establish standards that address key concerns while maintaining innovation. Organizations that take the lead in self-regulation often have more influence over eventual regulatory requirements.
Episode Resources:
Kristen Mathews on LinkedIn
VeraSafe Website
Kellie du Preez on LinkedIn
Danie Strachan on LinkedIn
Connect with us at podcast@verasafe.com
This podcast is brought to you by VeraSafe.
What if you could build a more effective privacy program by bridging the gap between legal requirements and technical implementation? In this episode of Privacy in Practice, we sit down with Peter Jaffe, VP & Sr. Associate General Counsel for Privacy, Technology, Facilities & Operations at National Geographic Society. Together, we:
- Explore Peter's unique journey into privacy law
- Examine the critical intersection of technical knowledge and privacy law
- Discuss essential technical concepts for privacy professionals
- Delve into effective strategies for building privacy programs
- Consider the role of privacy professionals as translators between stakeholders
- Explore practical approaches to privacy training
- Share valuable insights on managing global privacy compliance
- And so much more!
With over a decade of experience spanning both private practice and in-house roles, Peter brings a unique blend of technical acumen and legal expertise to privacy law, having evolved from his early career in financial services litigation to becoming a respected voice in privacy and data protection. His approach combines rigorous technical understanding with human-centered privacy principles, making him particularly effective at bridging the gap between legal requirements and practical implementation. Peter's experience in building privacy programs, managing data breaches, and navigating complex regulatory landscapes, coupled with his ability to translate technical concepts for diverse stakeholders, provides valuable insights for privacy professionals at all levels. His emphasis on understanding both the technical architecture and human elements of privacy makes him an especially relevant voice for organizations working to build sustainable privacy programs.
Episode Highlights:
[05:27] Technical Foundation for Privacy Professionals
Peter emphasizes that privacy professionals need a basic understanding of technical concepts to be effective advisors.
He recommends learning fundamentals of object-oriented programming, database structures, and access controls - even if informally through self-study. This technical knowledge helps professionals spot nuances that impact transparency requirements and data-handling practices. For privacy leaders developing their skills, starting with introductory programming concepts and database fundamentals provides a crucial foundation for understanding modern privacy challenges. Most importantly, this technical literacy enables better communication with IT teams and more practical implementation guidance.
[11:36] Strategic Approach to Delivering Difficult Privacy News
When delivering challenging privacy-related messages, Peter advocates for a methodical, analytical approach rather than emotional reactions. He recommends first identifying applicable laws and requirements, then systematically exploring options for compliance, and finally presenting a clear risk analysis across liability, litigation, and reputational dimensions. This structured approach helps maintain professional relationships while ensuring stakeholders understand the full context of privacy decisions. Privacy professionals can use this framework to transform potentially confrontational situations into collaborative problem-solving opportunities. The method also helps build credibility and trust with business partners who may be skeptical of privacy requirements.
[18:40] Building Effective Privacy Programs: The "Why" Before "How"
Peter stresses the importance of establishing the foundational "why" of privacy programs before diving into implementation. Privacy leaders should help organizations understand both risk factors (regulatory, litigation, reputation) and positive motivators (customer trust, contractual obligations). This foundation requires a deep understanding of organizational culture, risk tolerance, and stakeholder expectations.
The approach should align with existing governance structures rather than imposing a one-size-fits-all solution. Most critically, success depends on finding internal allies with relevant technical skills who can help discover and manage privacy requirements effectively.
[30:35] Targeted Privacy Training Strategy
Rather than attempting to cover all privacy principles broadly, Peter recommends separating training content based on audience needs. For general staff, focus on helping them recognize personal information and understand when to ask for help rather than technical compliance details. This targeted approach improves retention and practical application of privacy concepts. Training should be customized to reflect the organization's specific context and use cases rather than relying solely on generic materials. The key measure of success is whether employees know how to identify privacy issues and engage appropriate resources when needed.
Episode Resources:
Peter Jaffe on LinkedIn
Kellie du Preez on LinkedIn
Danie Strachan on LinkedIn
VeraSafe on LinkedIn
Connect with us at podcast@verasafe.com
This podcast is brought to you by VeraSafe.
In this inaugural episode of Privacy in Practice, hosts Kellie du Preez and Danie Strachan introduce VeraSafe's new podcast focused on making privacy compliance practical and accessible. Together, they:
- Share their personal journeys into privacy law
- Explore why privacy compliance is both challenging and rewarding
- Discuss the importance of balancing theoretical compliance requirements with real-world business constraints
- Examine recent EDPB guidance on controller obligations
- Address the growing regulatory emphasis on understanding technical implementations and data flows
- Examine the latest challenges with the EU AI Act
- Emphasize the need for a holistic approach to privacy compliance
- And so much more!
Kellie du Preez is a privacy compliance leader and former litigation attorney who transitioned from defending banks in Boston to focusing on global privacy compliance. With experience as both an IP litigator and privacy professional, she brings a unique perspective on balancing practical business needs with regulatory requirements. As a Data Protection Officer and privacy consultant at VeraSafe, Kellie helps organizations navigate complex privacy challenges with a focus on creating workable, cost-effective solutions.
Danie Strachan is a privacy professional who began his career in South African legal practice, where he developed deep experience in data protection law during the implementation of South Africa's Protection of Personal Information Act (POPIA). As a senior privacy counsel at VeraSafe, he specializes in helping organizations understand and implement privacy requirements across multiple jurisdictions, including the EU. Danie brings valuable insight into the evolution of privacy regulations and practical approaches to compliance.
Episode Highlights:
[00:20:58] Understanding Your Data Processing Chain
Privacy professionals must take a more active role in understanding their complete data processing ecosystem.
Recent EDPB guidance emphasizes that organizations can't simply delegate responsibility to processors - they need detailed knowledge of all subprocessors and their security measures. This includes knowing where data is hosted, what security measures are in place, and maintaining proper documentation of the entire processing chain. For DPOs and privacy leads, this means implementing robust vendor management processes, maintaining detailed data maps, and regularly reviewing subprocessor arrangements. This increased oversight requirement may require updating data processing agreements and implementing new monitoring systems.
[00:36:28] Beyond Checkbox Compliance
Privacy compliance requires moving beyond surface-level documentation to meaningful implementation. Organizations often focus too heavily on having privacy notices and policies while neglecting the actual operational aspects of privacy compliance. Privacy professionals need to dive deep into understanding actual data flows, processing activities, and technical implementations. This includes regular audits of data collection practices, storage durations, and processing purposes. The key is connecting written policies to practical implementation through technical controls and operational procedures.
[00:42:28] Preparing for the EU AI Act
With the February 2025 deadline for prohibited AI systems now in effect, privacy professionals need to conduct comprehensive AI audits within their organizations. This includes identifying all AI systems in use, evaluating them against the EU AI Act's risk categories, and developing plans to address any systems that are prohibited. Privacy teams should focus particularly on workplace monitoring systems, automated decision-making tools, and any AI systems that could affect individual rights. Creating an AI inventory and risk assessment framework should be an immediate priority.
[00:47:51] Managing Vendor AI Implementation
Privacy professionals must establish processes to evaluate AI capabilities being introduced through existing vendor relationships. Many vendors are rolling out AI features without explicit notification, creating compliance risks. Privacy teams should implement specific AI review procedures as part of vendor management, require vendors to provide detailed information about AI features, and establish clear internal protocols for when teams need to involve privacy review of new AI capabilities. This requires ongoing communication with business units and regular vendor technology reviews.
Episode Resources:
VeraSafe Website
Kellie du Preez on LinkedIn
Danie Strachan on LinkedIn
Connect with us at podcast@verasafe.com
This podcast is brought to you by VeraSafe.
Privacy in Practice, brought to you by VeraSafe, is the podcast for actionable insights and real-world strategies for privacy and compliance teams. Hosted by privacy pros Kellie du Preez and Danie Strachan, each episode unpacks the practical side of compliance and data management, bringing together industry leaders and thought-provoking discussions. In every episode, listeners will discover actionable solutions for today's most pressing privacy challenges. From navigating privacy laws like the GDPR and U.S. state regulations to exploring the intersection of privacy with AI, cybersecurity, and emerging technologies, we cover the topics that matter most to modern privacy professionals. Our discussions feature insights from professionals to help you develop and support privacy programs that drive business growth rather than hinder it. Whether you’re leading privacy efforts at your company or just beginning to explore this field, tune in for meaningful conversations that provide a straightforward approach to data privacy and empower listeners to make informed, confident decisions. Privacy isn’t just about regulatory boxes—it’s about fostering trust and resilience in a digital world.
Key Links
VeraSafe Website
Kellie du Preez on LinkedIn
Danie Strachan on LinkedIn
Connect with us at podcast@verasafe.com
This podcast is brought to you by VeraSafe.