Keeping you up to date with the latest trends and best-performing architectures in this fast-evolving field of computer science. Selecting papers by comparative results, citations, and influence, we educate you on the latest research. Consider supporting us at Patreon.com/PapersRead for feedback and ideas.
A weekly show all about audiobooks, recorded at the RNIB Talking Book studios. We talk to your favourite authors and narrators, along with reviews and news about new audiobooks. Presented and produced by Robert Kirkwood, you'll find a new episode here every Friday at 1pm, plus bonus content such as longer uncut interviews and episodes of our occasional extra show, The Book Group. Talking Books is a free service from RNIB giving access to over 40,000 fiction and non-fiction books for adults and ...
The Internet's Auditory Version of Reddit
Left On Read is a podcast where I'll be talking about all my bookish thoughts. Let's hang 🖤 • (Yes, I did rebrand the cover art. I wanted something simpler and less busy.) • https://instagram.com/ashleyyyreads
Tune in to our daily bible readings!
Welcome to We Read It One Night, the bookish comedy podcast where sisters Alison and Rachel introduce you to the next romance novel that’ll make you want to stay up all night reading. Subscribe! Follow! Rate! Review! Tell your friends about us! Instagram: @wereaditonenight Twitter: @wereaditpodcast Facebook: We Read It One Night TikTok: @wereaditonenight Email: wereaditonenight [at] gmail.com
Two best friends do a deep dive into a different series, looking at one book each month and discussing its plot, characters, and themes.
It’s about life, but not just life. When you hear the voice of esteemed historian Finn Melanson you’ll be hooked.
Two girls - one single, one not - discuss different dating points of view and try to debunk why we do what we do while dating in the 21st century.
Chris and Emily discuss books and literary adventures
On Read This One, we read all of our favorite children's books for you to follow along with before bed, in the car, or wherever! It's a perfect podcast for children learning to read. Have a favorite book you want us to read? Great! You can email us at readthisonepodcast@gmail.com to tell us what you want to hear. Let's discover new books together! Support this podcast: https://podcasters.spotify.com/pod/show/samuel289/support
Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations (48:19)
Large-scale recommendation systems are characterized by their reliance on high-cardinality, heterogeneous features and the need to handle tens of billions of user actions on a daily basis. Despite being trained on huge volumes of data with thousands of features, most Deep Learning Recommendation Models (DLRMs) in industry fail to scale with compute.…
Episode 206 - Author Spotlight with Rebecca Rego Barry (1:37:21)
Welcome to Episode 206 where we have a fantastic conversation with Rebecca Rego Barry, author of THE VANISHING OF CAROLYN WELLS: Investigations into a Forgotten Mystery Author. One reviewer referred to Barry’s book as a “process biography.” It is true, Barry takes you along on her investigation into the life of Carolyn Wells who, it turns out, wrot…
TV Tuesday Ep. 11- "I think you need a doctor." (Doctor Who Series 1) (1:06:30)
It's TV Tuesday, so Hannah and Laura decided to hop in the Tardis and cover the first series of Doctor Who! They chat about the actors, favorite episodes, themes, and of course, the Daleks. ***This podcast episode contains SPOILERS for Doctor Who series 1.*** Media Mentions: Doctor Who series 1---Max The Lord of the Rings movies---Max Community---N…
RED BUBBLE STORE: https://rdbl.co/2BXMEkq DISCORD: https://discord.com/invite/uWZkb2a 2:05 - Read It On Reddit 31:28 - Shower Thoughts 37:19 - Podnapping: Saturday Quiz Time AMA - readitpodcast@gmail.com - Ask Us Anything! LET YOUR GUARD DOWN! By Read It Podcasts
In today's show we talk to Leeanne O'Donnell about spirit and matter, love and lust, and reality and magic, all in her book Sparks of Bright Matter. We also look forward to the Boswell Book Festival with Gordon Turnbull and find new books in the RNIB Library. By RNIB Connect Radio
In this episode, I talk about all the books I hauled and read in March. Mistakes were made, and my bank account cried. Let's just get into it 🤦♀️
Listen to our Daily Bible Readings on YouTube! Support the show. By Paddy
Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models (35:12)
We study how to apply large language models to write grounded and organized long-form articles from scratch, with comparable breadth and depth to Wikipedia pages. This underexplored problem poses new challenges at the pre-writing stage, including how to research the topic and prepare an outline prior to writing. We propose STORM, a writing system f…
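As a rough sketch of the pre-writing stage the abstract describes, the loop below researches a topic and distills an outline; the function names and loop shape are illustrative placeholders, not STORM's actual API.

```python
def storm_prewrite(topic, ask_fn, retrieve_fn, outline_fn, n_rounds=3):
    """Hypothetical skeleton of a STORM-style pre-writing stage:
    repeatedly ask questions about the topic, ground the answers in
    retrieved sources, then distill an outline before any writing."""
    references = []
    for _ in range(n_rounds):
        question = ask_fn(topic, references)      # next research question
        references.extend(retrieve_fn(question))  # ground it in sources
    return outline_fn(topic, references)          # outline for the writing stage
```

The callables are injected so an LLM, a search engine, or simple stubs can play each role.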
Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models (37:55)
In this work, we introduce Mini-Gemini, a simple and effective framework enhancing multi-modality Vision Language Models (VLMs). Despite the advancements in VLMs facilitating basic visual dialog and reasoning, a performance gap persists compared to advanced models like GPT-4 and Gemini. We try to narrow the gap by mining the potential of VLMs for b…
InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models (20:46)
We present InstantMesh, a feed-forward framework for instant 3D mesh generation from a single image, featuring state-of-the-art generation quality and significant training scalability. By synergizing the strengths of an off-the-shelf multiview diffusion model and a sparse-view reconstruction model based on the LRM architecture, InstantMesh is able …
Ep. 111- Nobody Is As Hot As Harry Dresden (Storm Front) (2:08:39)
The Year of the Dresden is finally here! Hannah and Laura are covering the first third of Jim Butcher's Storm Front and friends, this book is a romp. Laura also makes a bold claim about television shows that she watched growing up and gushes about a new favorite book. Hannah has delved into a lot of recommendations that were given to her by Laura a…
From Words to Numbers: Your Large Language Model Is Secretly A Capable Regressor When Given In-Context Examples (36:41)
We analyze how well pre-trained large language models (e.g., Llama2, GPT-4, Claude 3, etc) can do linear and non-linear regression when given in-context examples, without any additional training or gradient updates. Our findings reveal that several large language models (e.g., GPT-4, Claude 3) are able to perform regression tasks with a performance…
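To make the setup concrete, here is a minimal sketch of how numeric (x, y) pairs might be serialized into an in-context-learning prompt; the field names are an assumption for illustration and may differ from the paper's exact format.

```python
def regression_prompt(examples, query_x):
    """Build a text prompt that presents numeric (x, y) pairs as
    in-context examples, ending with an unanswered query for the
    LLM to complete with its predicted number."""
    parts = [f"Feature 0: {x}\nOutput: {y}" for x, y in examples]
    parts.append(f"Feature 0: {query_x}\nOutput:")  # model fills this in
    return "\n\n".join(parts)
```

No gradient updates are involved: the pre-trained model sees only this text and must infer the input-output mapping from the examples alone.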
Hey pals! Today, we're reeled in by listener suggestion HOOK, LINE & SINKER by Tessa Bailey, the sister sequel to IT HAPPENED ONE SUMMER. Fox and Hannah take us on a steamy friends to lovers book filled with sea shanties, the Pacific Northwest (with accompanying Twilight vibes), and plenty of angst. Enjoy the show! Ep. 38 - It Happened One Summer b…
Researchers have made significant progress in automating the software development process in the past decades. Automated techniques for issue summarization, bug reproduction, fault localization, and program repair have been built to ease the workload of developers. Recent progress in Large Language Models (LLMs) has significantly impacted the devel…
TrustLLM: Trustworthiness in Large Language Models (2:48:17)
Large language models (LLMs), exemplified by ChatGPT, have gained considerable attention for their excellent natural language processing capabilities. Nonetheless, these LLMs present many challenges, particularly in the realm of trustworthiness. Therefore, ensuring the trustworthiness of LLMs emerges as an important topic. This paper introduces Tru…
RED BUBBLE STORE: https://rdbl.co/2BXMEkq DISCORD: https://discord.com/invite/uWZkb2a 4:50 - Read It On Reddit 16:12 - Ask Reddit 21:59 - Today I Advice 28:40 - Shower Thoughts 34:22 - Podnapping: Mythbusters AMA - readitpodcast@gmail.com - Ask Us Anything! LET YOUR GUARD DOWN! By Read It Podcasts
AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation (11:57)
In this study, we propose AniPortrait, a novel framework for generating high-quality animation driven by audio and a reference portrait image. Our methodology is divided into two stages. Initially, we extract 3D intermediate representations from audio and project them into a sequence of 2D facial landmarks. Subsequently, we employ a robust diffusio…
367: Chioma Okereke - Water Baby & Percival Everett on The Trees: A Novel (57:44)
Read On this week features Chioma Okereke with her coming-of-age audiobook Water Baby, set in the floating slum of Makoko in Lagos. We hear about The Trees: A Novel by Percival Everett and find new books entering the RNIB Library. By RNIB Connect Radio
Generating long-form 44.1kHz stereo audio from text prompts can be computationally demanding. Further, most previous works do not address the fact that music and sound effects naturally vary in duration. Our research focuses on the efficient generation of long-form, variable-length stereo music and sounds at 44.1kHz using text prompts with a generative…
Gaussian Head Avatar: Ultra High-fidelity Head Avatar via Dynamic Gaussians (35:11)
Creating high-fidelity 3D head avatars has always been a research hotspot, but it remains a great challenge under lightweight sparse-view setups. In this paper, we propose Gaussian Head Avatar, represented by controllable 3D Gaussians, for high-fidelity head avatar modeling. We optimize the neutral 3D Gaussians and a fully learned MLP-based deform…
BONUS EPISODE- "Only one bed. 100%. All the time." Chatting About Queer Romances with Dani Finn (1:14:17)
Hannah and Laura had so much fun talking with the wonderful Dani Finn about queer romance as a genre, queer characters, and tropes! Dani shares about their writing journey, talks publishing and some of their favorite books featuring queer love. If you're looking to add to your TBR, this is the episode you need to listen to! Be sure to follow Dani a…
Parameter-efficient fine-tuning (PEFT) methods seek to adapt large models via updates to a small number of weights. However, much prior interpretability work has shown that representations encode rich semantic information, suggesting that editing representations might be a more powerful alternative. Here, we pursue this hypothesis by developing a f…
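A minimal sketch of a low-rank representation edit in the spirit the abstract describes; the parameterization below follows the published LoReFT formula, but treat the details (shapes reduced to a single token vector) as an assumption for illustration.

```python
import numpy as np

def loreft_intervention(h, R, W, b):
    """Edit a hidden state h (d,) only within the r-dimensional
    subspace spanned by the rows of R (r, d), steering its
    projection toward the learned target W @ h + b."""
    return h + R.T @ (W @ h + b - R @ h)
```

Only R, W, and b are trained; the base model's weights stay frozen, which is what makes the method parameter-efficient.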
Welcome to Episode 205! April is National Poetry Month and we are here for it. Emily is currently reading YOU ARE HERE: Poetry in the Natural World, a new anthology edited by Ada Limón, and Chris is reading BOATS FOR WOMEN by Sandra Yannone. Since our last episode, Chris finished listening to WAKE UP WITH PURPOSE! What I’ve Learned in my First Hundr…
Large language models (LLMs) often generate content that contains factual errors when responding to fact-seeking prompts on open-ended topics. To benchmark a model's long-form factuality in open domains, we first use GPT-4 to generate LongFact, a prompt set comprising thousands of questions spanning 38 topics. We then propose that LLM agents can be…
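The per-fact checking the abstract gestures at can be sketched as a loop; the function names are placeholders, and in the paper both steps are carried out by an LLM agent with access to search.

```python
def rate_long_form(response, split_fn, check_fn):
    """Sketch of long-form factuality scoring: split an answer into
    individual facts, check each one against external evidence, and
    report the supported fraction."""
    facts = split_fn(response)
    supported = sum(1 for fact in facts if check_fn(fact))
    return supported / max(len(facts), 1)  # avoid division by zero
```

Scoring per atomic fact, rather than per response, lets a mostly-correct long answer earn partial credit.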
RED BUBBLE STORE: https://rdbl.co/2BXMEkq DISCORD: https://discord.com/invite/uWZkb2a 1:59 - Read It On Reddit 22:28 - Ask Reddit 35:06 - Today I Advice 39:29 - Shower Thoughts 48:40 - Podnapping: r/trivia - Absurd American Trivia AMA - readitpodcast@gmail.com - Ask Us Anything! LET YOUR GUARD DOWN! By Read It Podcasts
We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage …
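The interleaving can be pictured as a layer schedule like the one below; the default ratios (one attention layer per eight, MoE in every other layer) are illustrative assumptions, not necessarily the paper's exact configuration.

```python
def jamba_layer_pattern(n_layers, attn_every=8, moe_every=2):
    """Sketch of a hybrid Transformer-Mamba schedule: mostly Mamba
    layers with a periodic attention layer, and an MoE feed-forward
    replacing the dense MLP in every moe_every-th layer."""
    layers = []
    for i in range(n_layers):
        kind = "attention" if i % attn_every == attn_every - 1 else "mamba"
        mlp = "moe" if i % moe_every == 1 else "dense"
        layers.append((kind, mlp))
    return layers
```

Keeping attention layers rare bounds the KV-cache cost, while MoE raises capacity without raising active parameters per token.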
QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models (36:22)
Recent years have witnessed a rapid development of large language models (LLMs). Despite the strong ability in many language-understanding tasks, the heavy computational burden largely restricts the application of LLMs, especially when one needs to deploy them onto edge devices. In this paper, we propose a quantization-aware low-rank adaptation (Q…
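At its core, the idea combines a quantized frozen base weight with a trainable low-rank update; the sketch below is heavily simplified (a single per-tensor scale instead of the paper's group-wise quantization, and no quantization-aware balancing).

```python
import numpy as np

def qa_lora_forward(x, w_int, scale, A, B):
    """Forward pass through a quantized linear layer plus a low-rank
    adapter. x: (d_in,), w_int: integer base weights (d_in, d_out),
    scale: dequantization scale, A: (d_in, r), B: (r, d_out)."""
    w = w_int * scale            # dequantize the frozen base weights
    return x @ w + (x @ A) @ B   # base output + low-rank correction
```

Keeping the base weights in integer form is what lets the adapted model stay cheap to store and deploy on edge devices.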
In today's show we're re-joined by Louise Hare as she tells us about the sequel to Miss Aldridge Regrets, Harlem After Midnight. Plus we delve into the archive to talk to Sara Collins about The Confessions of Frannie Langton, and find some brand new audiobooks that are also available from RNIB Talking Books.…
MegaBlocks: Efficient Sparse Training with Mixture-of-Experts (41:52)
We present MegaBlocks, a system for efficient Mixture-of-Experts (MoE) training on GPUs. Our system is motivated by the limitations of current frameworks, which restrict the dynamic routing in MoE layers to satisfy the constraints of existing software and hardware. These formulations force a tradeoff between model quality and hardware efficiency, a…
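The routing step whose uneven expert loads motivate the system can be sketched as below (pure Python for clarity); MegaBlocks' contribution is executing the resulting imbalanced computation with block-sparse kernels rather than dropping or padding tokens, which is not shown here.

```python
def topk_routing(router_logits, k=2):
    """Assign each token (one row of logits) to its top-k experts and
    tally the per-expert load, which is uneven in general."""
    idx = [sorted(range(len(row)), key=lambda e: -row[e])[:k]
           for row in router_logits]
    counts = [0] * len(router_logits[0])
    for experts in idx:
        for e in experts:
            counts[e] += 1
    return idx, counts  # per-token expert ids, per-expert token counts
```

Frameworks that require equal-sized expert batches must reconcile these counts by dropping or padding; block-sparse execution sidesteps that tradeoff.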
VoiceCraft: Zero-Shot Speech Editing and Text-to-Speech in the Wild (38:32)
We introduce VoiceCraft, a token infilling neural codec language model, that achieves state-of-the-art performance on both speech editing and zero-shot text-to-speech (TTS) on audiobooks, internet videos, and podcasts. VoiceCraft employs a Transformer decoder architecture and introduces a token rearrangement procedure that combines causal masking a…
Indie Intermission Ep. 15- "Her power is her perseverance.” An interview with Bryan S. Glosemeyer (Before the Shattered Gates of Heaven, Vol. 1) (1:12:49)
Indiemission #5 is coming to a close, but not without an awesome author interview! This week Hannah and Laura are thrilled to have Bryan S. Glosemeyer on the pod to talk about reading, his writing journey, and of course, Before the Shattered Gates of Heaven. They chat about the book's characters, themes, inspirations for the series and what's to co…