Read On
 
Keeping you up to date with the latest trends and best-performing architectures in this fast-evolving field of computer science. Selecting papers by comparative results, citations and influence, we educate you on the latest research. Consider supporting us on Patreon.com/PapersRead for feedback and ideas.
 
A weekly show all about audiobooks recorded at the RNIB Talking Book studios. We talk to your favourite authors and narrators, along with reviews and news about new audiobooks. Presented and produced by Robert Kirkwood, you'll find a new episode here every Friday at 1pm plus bonus content such as longer uncut interviews and episodes of our occasional extra show, The Book Group. Talking Books is a free service from RNIB giving access to over 40,000 fiction and non-fiction books for adults and ...
 
Left On Read is a podcast where I'll be talking through all my bookish thoughts. Let's hang 🖤 • (Yes, I did rebrand the cover art. I wanted something simpler and less busy.) • https://instagram.com/ashleyyyreads
 
 
Welcome to We Read It One Night, the bookish comedy podcast where sisters Alison and Rachel introduce you to the next romance novel that’ll make you want to stay up all night reading. Subscribe! Follow! Rate! Review! Tell your friends about us! Instagram: @wereaditonenight Twitter: @wereaditpodcast Facebook: We Read It One Night TikTok: @wereaditonenight Email: wereaditonenight [at] gmail.com
 
On Read This One, we read all of our favorite children's books for you to follow along with before bed, in the car, or wherever! It's a perfect podcast for children learning to read. Have a favorite book you want us to read? Great! You can email us at readthisonepodcast@gmail.com to tell us what you want to hear. Let's discover new books together! Support this podcast: https://podcasters.spotify.com/pod/show/samuel289/support
 
 
Large-scale recommendation systems are characterized by their reliance on high-cardinality, heterogeneous features and the need to handle tens of billions of user actions on a daily basis. Despite being trained on huge volumes of data with thousands of features, most Deep Learning Recommendation Models (DLRMs) in industry fail to scale with compute.…
 
Welcome to Episode 206 where we have a fantastic conversation with Rebecca Rego Barry, author of THE VANISHING OF CAROLYN WELLS: Investigations into a Forgotten Mystery Author. One reviewer referred to Barry’s book as a “process biography.” It is true, Barry takes you along on her investigation into the life of Carolyn Wells who, it turns out, wrot…
 
It's TV Tuesday, so Hannah and Laura decided to hop in the Tardis and cover the first series of Doctor Who! They chat about the actors, favorite episodes, themes, and of course, the Daleks. ***This podcast episode contains SPOILERS for Doctor Who series 1.*** Media Mentions: Doctor Who series 1---Max The Lord of the Rings movies---Max Community---N…
 
We study how to apply large language models to write grounded and organized long-form articles from scratch, with comparable breadth and depth to Wikipedia pages. This underexplored problem poses new challenges at the pre-writing stage, including how to research the topic and prepare an outline prior to writing. We propose STORM, a writing system f…
 
In this work, we introduce Mini-Gemini, a simple and effective framework enhancing multi-modality Vision Language Models (VLMs). Despite the advancements in VLMs facilitating basic visual dialog and reasoning, a performance gap persists compared to advanced models like GPT-4 and Gemini. We try to narrow the gap by mining the potential of VLMs for b…
 
We present InstantMesh, a feed-forward framework for instant 3D mesh generation from a single image, featuring state-of-the-art generation quality and significant training scalability. By synergizing the strengths of an off-the-shelf multiview diffusion model and a sparse-view reconstruction model based on the LRM architecture, InstantMesh is able …
 
The Year of the Dresden is finally here! Hannah and Laura are covering the first third of Jim Butcher's Storm Front and, friends, this book is a romp. Laura also makes a bold claim about television shows that she watched growing up and gushes about a new favorite book. Hannah has delved into a lot of recommendations that were given to her by Laura a…
 
We analyze how well pre-trained large language models (e.g., Llama2, GPT-4, Claude 3) can do linear and non-linear regression when given in-context examples, without any additional training or gradient updates. Our findings reveal that several large language models (e.g., GPT-4, Claude 3) are able to perform regression tasks with a performance…
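A minimal sketch of the in-context regression setup described above. The prompt wording, the hidden linear function, and the query point are illustrative assumptions, not the paper's exact protocol; the least-squares fit stands in for the traditional baselines such models are compared against.

```python
# Sketch: probing an LLM for in-context regression (hypothetical setup).
# We format (x, y) examples as plain text, the kind of prompt fed to models
# such as GPT-4 or Claude 3; no model is actually called here.
import numpy as np

def make_regression_prompt(xs, ys, x_query):
    """Format in-context examples followed by a query point."""
    lines = [f"Input: {x:.3f}\nOutput: {y:.3f}" for x, y in zip(xs, ys)]
    lines.append(f"Input: {x_query:.3f}\nOutput:")
    return "\n".join(lines)

# Hidden linear function the model must infer from the examples alone.
rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, size=20)
ys = 3.0 * xs + 0.5

prompt = make_regression_prompt(xs, ys, x_query=0.25)

# Least-squares baseline on the same examples; on this noiseless data it
# recovers the true function, so the prediction at x = 0.25 is 1.25.
slope, intercept = np.polyfit(xs, ys, deg=1)
baseline_pred = slope * 0.25 + intercept
```

The model's completion after the final "Output:" would then be parsed as a number and scored against such baselines.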
 
Hey pals! Today, we're reeled in by listener suggestion HOOK, LINE & SINKER by Tessa Bailey, the sister sequel to IT HAPPENED ONE SUMMER. Fox and Hannah take us on a steamy friends to lovers book filled with sea shanties, the Pacific Northwest (with accompanying Twilight vibes), and plenty of angst. Enjoy the show! Ep. 38 - It Happened One Summer b…
 
Researchers have made significant progress in automating the software development process over the past decades. Automated techniques for issue summarization, bug reproduction, fault localization, and program repair have been built to ease the workload of developers. Recent progress in Large Language Models (LLMs) has significantly impacted the devel…
 
Large language models (LLMs), exemplified by ChatGPT, have gained considerable attention for their excellent natural language processing capabilities. Nonetheless, these LLMs present many challenges, particularly in the realm of trustworthiness. Therefore, ensuring the trustworthiness of LLMs emerges as an important topic. This paper introduces Tru…
 
RED BUBBLE STORE: https://rdbl.co/2BXMEkq DISCORD: https://discord.com/invite/uWZkb2a 4:50 - Read It On Reddit 16:12 - Ask Reddit 21:59 - Today I Advice 28:40 - Shower Thoughts 34:22 - Podnapping: Mythbusters AMA - readitpodcast@gmail.com - Ask Us Anything! LET YOUR GUARD DOWN! By Read It Podcasts
 
In this study, we propose AniPortrait, a novel framework for generating high-quality animation driven by audio and a reference portrait image. Our methodology is divided into two stages. Initially, we extract 3D intermediate representations from audio and project them into a sequence of 2D facial landmarks. Subsequently, we employ a robust diffusio…
 
Generating long-form 44.1kHz stereo audio from text prompts can be computationally demanding. Further, most previous works do not address the fact that music and sound effects naturally vary in duration. Our research focuses on the efficient generation of long-form, variable-length stereo music and sounds at 44.1kHz using text prompts with a generative…
 
Creating high-fidelity 3D head avatars has always been a research hotspot, but it remains a great challenge under lightweight sparse-view setups. In this paper, we propose Gaussian Head Avatar represented by controllable 3D Gaussians for high-fidelity head avatar modeling. We optimize the neutral 3D Gaussians and a fully learned MLP-based deform…
 
Hannah and Laura had so much fun talking with the wonderful Dani Finn about queer romance as a genre, queer characters, and tropes! Dani shares about their writing journey, talks publishing and some of their favorite books featuring queer love. If you're looking to add to your TBR, this is the episode you need to listen to! Be sure to follow Dani a…
 
Parameter-efficient fine-tuning (PEFT) methods seek to adapt large models via updates to a small number of weights. However, much prior interpretability work has shown that representations encode rich semantic information, suggesting that editing representations might be a more powerful alternative. Here, we pursue this hypothesis by developing a f…
 
Welcome to Episode 205! April is National Poetry Month and we are here for it. Emily is currently reading YOU ARE HERE: Poetry in the Natural World, a new anthology edited by Ada Limón, and Chris is reading BOATS FOR WOMEN by Sandra Yannone. Since our last episode, Chris finished listening to WAKE UP WITH PURPOSE! What I’ve Learned in my First Hundr…
 
Large language models (LLMs) often generate content that contains factual errors when responding to fact-seeking prompts on open-ended topics. To benchmark a model's long-form factuality in open domains, we first use GPT-4 to generate LongFact, a prompt set comprising thousands of questions spanning 38 topics. We then propose that LLM agents can be…
 
RED BUBBLE STORE: https://rdbl.co/2BXMEkq DISCORD: https://discord.com/invite/uWZkb2a 1:59 - Read It On Reddit 22:28 - Ask Reddit 35:06 - Today I Advice 39:29 - Shower Thoughts 48:40 - Podnapping: r/trivia - Absurd American Trivia AMA - readitpodcast@gmail.com - Ask Us Anything! LET YOUR GUARD DOWN! By Read It Podcasts
 
We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage …
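The interleaving idea above can be sketched as a simple layer plan: most layers use a Mamba mixer, a few use attention, and MoE replaces the dense feed-forward block at a fixed interval. The specific ratios below (attention every 4th layer, MoE every 2nd) are illustrative assumptions, not Jamba's published configuration.

```python
# Toy layer-stacking plan for a hybrid Transformer-Mamba MoE model.
# Ratios are illustrative, not the actual Jamba architecture hyperparameters.
def build_layer_plan(n_layers, attn_every=4, moe_every=2):
    """Return (mixer, ffn) type for each layer in the stack."""
    plan = []
    for i in range(n_layers):
        mixer = "attention" if i % attn_every == 0 else "mamba"
        ffn = "moe" if i % moe_every == 1 else "dense"
        plan.append((mixer, ffn))
    return plan

plan = build_layer_plan(8)
# Only 2 of 8 layers pay the quadratic attention cost; MoE layers grow
# total parameters while keeping active parameters per token roughly fixed.
```

The design point is that the mixer choice (attention vs. Mamba) and the feed-forward choice (dense vs. MoE) vary independently per layer, which is what lets such hybrids trade memory, throughput, and capacity.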
 
Recent years have witnessed a rapid development of large language models (LLMs). Despite the strong ability in many language-understanding tasks, the heavy computational burden largely restricts the application of LLMs, especially when one needs to deploy them onto edge devices. In this paper, we propose a quantization-aware low-rank adaptation (Q…
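The core idea of quantization-aware low-rank adaptation can be sketched as a frozen low-precision base weight plus a small trainable low-rank update. The uniform fake-quantizer, shapes, and zero-initialized adapter below are illustrative assumptions, not the paper's exact method.

```python
# Sketch: low-precision base weight + trainable low-rank adapter.
# Quantizer and shapes are toy stand-ins for the real scheme.
import numpy as np

def quantize_dequantize(w, n_bits=4):
    """Uniform symmetric fake-quantization of a weight matrix."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.round(w / scale).clip(-qmax, qmax) * scale

rng = np.random.default_rng(0)
d, r = 16, 2                        # hidden size, adapter rank
W = rng.normal(size=(d, d))         # frozen pre-trained weight
Wq = quantize_dequantize(W)         # low-precision base for edge deployment

A = rng.normal(size=(r, d)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                # zero init: adapter starts as a no-op

def adapted_forward(x):
    # Only A and B would be trained; Wq stays frozen and quantized.
    return x @ (Wq + B @ A).T

x = rng.normal(size=(1, d))
```

With B initialized to zero, the adapted layer initially matches the quantized base exactly, and fine-tuning only updates the 2·d·r adapter parameters rather than the d·d base.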
 
In today's show we're re-joined by Louise Hare as she tells us about the sequel to Miss Aldridge Regrets, Harlem After Midnight. Plus we delve into the archive to talk to Sara Collins about The Confessions of Frannie Langton, and find some brand new audiobooks that are also available from RNIB Talking Books.…
 
We present MegaBlocks, a system for efficient Mixture-of-Experts (MoE) training on GPUs. Our system is motivated by the limitations of current frameworks, which restrict the dynamic routing in MoE layers to satisfy the constraints of existing software and hardware. These formulations force a tradeoff between model quality and hardware efficiency, a…
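The tradeoff described above comes from fixed per-expert capacity in conventional MoE layers. A minimal sketch of the alternative, "dropless" style of routing, where each expert processes however many tokens are assigned to it, follows; the top-1 gating and single-matrix experts are toy stand-ins, not MegaBlocks' block-sparse kernels.

```python
# Sketch: capacity-free token-to-expert routing (toy version).
# Real systems implement the variable-size groups with block-sparse matmuls.
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d, n_experts = 6, 4, 2
x = rng.normal(size=(n_tokens, d))

logits = rng.normal(size=(n_tokens, n_experts))
expert_of = logits.argmax(axis=1)          # top-1 routing decision per token

experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

out = np.empty_like(x)
for e in range(n_experts):
    idx = np.where(expert_of == e)[0]      # variable-size group: no token
    out[idx] = x[idx] @ experts[e]         # is ever dropped for capacity
```

A fixed-capacity implementation would instead pad or truncate each group to the same size, which is exactly the quality/efficiency tradeoff the system aims to remove.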
 
We introduce VoiceCraft, a token infilling neural codec language model, that achieves state-of-the-art performance on both speech editing and zero-shot text-to-speech (TTS) on audiobooks, internet videos, and podcasts. VoiceCraft employs a Transformer decoder architecture and introduces a token rearrangement procedure that combines causal masking a…
 
Indiemission #5 is coming to a close, but not without an awesome author interview! This week Hannah and Laura are thrilled to have Bryan S. Glosemeyer on the pod to talk about reading, his writing journey, and of course, Before the Shattered Gates of Heaven. They chat about the book's characters, themes, inspirations for the series and what's to co…
 