Player FM - Internet Radio Done Right
All content is provided by Scale Cast – A podcast about big data, distributed systems, and scalability. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Scale Cast – A podcast about big data, distributed systems, and scalability or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://ko.player.fm/legal.
Lecture 2: Cluster Computing and MapReduce
All Episodes
In 2006 we were building distributed applications that needed a master (also known as a coordinator or controller) to manage the application's subprocesses. It was a scenario we had encountered before and saw repeated over and over again, both inside and outside of Yahoo!. For example, consider an application that consists of a number of processes. Each process needs to be aware of the other processes in the system, of how requests are partitioned among the processes, and of configuration changes and failures. An application-specific central control process typically manages these needs, but because such control programs are specific to each application, they represent a recurring development cost for every distributed application. And because each control program is rewritten from scratch, it never receives the investment of development time needed to become truly robust, making it an unreliable single point of failure. link to podcast…
The Bloom filter, conceived by Burton H. Bloom in 1970, is a space-efficient probabilistic data structure that is used to test whether an element is a member of a set. False positives are possible, but false negatives are not. Elements can be added to the set, but not removed (though this can be addressed with a counting filter). The more elements that are added to the set, the larger the probability of false positives. For example, one might use a Bloom filter to do spell-checking in a space-efficient way. A Bloom filter to which a dictionary of correct words has been added will accept all words in the dictionary and reject almost all words that are not, which is good enough in some cases. Depending on the false positive rate, the resulting data structure can require as little as a byte per dictionary word. In the last few years the Bloom filter has become a hot topic again, and several modifications and improvements have appeared. In this talk I will present my latest improvements in this area.

Speaker: Ely Porat. Ely Porat received his Doctorate from Bar-Ilan University in 2000. Following that, he fulfilled his military service and, in parallel, worked as a faculty member at Bar-Ilan University. Having spent the spring 2007 semester as a Visiting Scientist at Google, he is now back at Bar-Ilan University. The main body of Ely Porat's work concerns matching problems: string matching, pattern matching, and subset matching. He has also worked on the nearest-pair problem in high-dimensional spaces as well as sketching and edit distance. link…
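The classic structure described above can be sketched in a few lines. This is a minimal illustration of the basic idea (bit array plus k hash functions), not the speaker's improved variants; the sizes and the SHA-256-based hashing are arbitrary choices for the sketch.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-slot array.
    False positives are possible; false negatives are not."""

    def __init__(self, m_slots=1024, k_hashes=3):
        self.m = m_slots
        self.k = k_hashes
        self.bits = bytearray(m_slots)  # one byte per bit, for simplicity

    def _positions(self, item):
        # Derive k positions by hashing the item with k different salts.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def might_contain(self, item):
        # True means "probably in the set"; False is definitive.
        return all(self.bits[p] for p in self._positions(item))
```

With a dictionary of words added via `add`, every dictionary word passes `might_contain`, while a non-word is rejected with high probability, matching the spell-checking use case in the abstract.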
In this talk we examine how high performance computing has changed over the last 10 years and look toward the future in terms of trends. These changes have had, and will continue to have, a major impact on our software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide-area) dynamic, distributed, and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. Link to video…
This talk takes us on a journey through three varied but interconnected topics. First, our research lab has engaged in a series of disk-based computations extending over five years. Disks have traditionally been used for filesystems, for virtual memory, and for databases. Disk-based computation opens up an important fourth use: an abstraction for multiple disks that allows parallel programs to treat them in a manner similar to RAM. The key observation is that 50 disks have approximately the same parallel bandwidth as a _single_ RAM subsystem. This leaves latency as the primary concern. A second key is the use of techniques like delayed duplicate detection to avoid that latency. link to video…
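Delayed duplicate detection, mentioned above, avoids disk latency by never probing a lookup structure per generated state; instead, successors are appended sequentially and duplicates are removed in one batch pass per search level. A toy in-memory sketch of that pattern (the talk's actual implementation operates on sorted disk files, which the `visited` set and `sorted` call stand in for here):

```python
def bfs_ddd(start, neighbors):
    """Breadth-first search with delayed duplicate detection.
    `neighbors` maps a state to an iterable of successor states."""
    visited = {start}      # on disk: a sorted file of seen states
    frontier = [start]
    order = [start]        # states in the order they are discovered
    while frontier:
        # Phase 1: generate all successors, appending with no checks
        # (sequential writes only -- no random-access probes).
        candidates = []
        for state in frontier:
            candidates.extend(neighbors(state))
        # Phase 2: delayed duplicate detection -- sort the batch and
        # merge it against the visited set in one sequential pass.
        frontier = []
        for state in sorted(set(candidates)):
            if state not in visited:
                visited.add(state)
                frontier.append(state)
                order.append(state)
    return order
```

The point of the two-phase structure is that both phases are streaming operations, which is what makes an array of 50 disks competitive with RAM on bandwidth despite the latency gap.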
Lecture 1 in a five-part series introducing MapReduce and cluster computing. See http://code.google.com/edu/… for slides and other resources. Link to video
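The MapReduce model introduced in these lectures can be illustrated with the canonical word-count example. This is a single-process sketch of the programming model only (user-supplied map and reduce functions, with the framework doing the grouping); a real MapReduce framework distributes the map tasks, the shuffle, and the reduce tasks across a cluster.

```python
from collections import defaultdict

def map_fn(line):
    # Map: emit a (key, value) pair for each word in the input line.
    for word in line.split():
        yield (word.lower(), 1)

def reduce_fn(word, counts):
    # Reduce: fold all values emitted for one key into a single result.
    return (word, sum(counts))

def run_mapreduce(lines):
    # The "framework": run map over all inputs, group pairs by key
    # (the shuffle phase), then run reduce once per key.
    groups = defaultdict(list)
    for line in lines:
        for key, value in map_fn(line):
            groups[key].append(value)
    return dict(reduce_fn(k, v) for k, v in groups.items())
```

Because each `reduce_fn` call depends only on its own key's values, the reduce phase parallelizes trivially across machines, which is the core insight the lecture series builds on.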