PING
Content provided by APNIC. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by APNIC or its podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://ko.player.fm/legal.
PING is a podcast for people who want to look behind the scenes into the workings of the Internet. Each fortnight we will chat with people who have built and are improving the health of the Internet. The views expressed by the featured speakers are their own and do not necessarily reflect the views of APNIC.
87 episodes
All episodes
In this episode of PING, APNIC’s Chief Scientist, Geoff Huston, revisits changes underway in how the Domain Name System (DNS) delegates authority over a given zone and how resolvers discover the new authoritative sources. We last explored this in March 2024. In DNS, the word ‘domain’ refers to a scope of authority. Within a domain, everything is governed by its delegated authority. While that authority may only directly manage its immediate subdomains (children), its control implicitly extends to all subordinate levels (grandchildren and beyond). If a parent domain withdraws delegation from a child, everything beneath that child disappears. Think of it like a Venn diagram of nested circles — being a subdomain means being entirely within the parent’s scope. The issue lies in how this delegation is handled: by way of nameserver (NS) records. These are both part of the child zone (where they are defined) and the parent zone (which must reference them). This becomes especially tricky with DNSSEC. The parent can’t authoritatively sign the child’s NS records because they are technically owned by the child. But if the child signs them, it breaks the trust chain from the parent. Another complication is the emergence of third parties acting for the delegate, who operate the actual machinery of the DNS. We need mechanisms to give them permission to change operational aspects of the delegation, without handing them all the keys a delegate holds over their domain name. A new activity has been spun up in the IETF to discuss how to address this delegation problem by creating a new kind of DNS record, the DELEG record. This is proposed to follow the Service Binding model defined in RFC 9460. Exactly how this works and what it means for the DNS is still up in the air. DELEG could fundamentally change how authoritative answers are discovered, how DNS messages are transported, and how intermediaries interact with the DNS ecosystem.
In the future, significant portions of DNS traffic might flow over new protocols, introducing novel behaviours in the relationships between resolvers and authoritative servers. Read more about DELEG on the APNIC Blog and the web: DNS and the proposed DELEG record (APNIC Blog, February 2024) DELEG Working Group Charter (IETF Website) Service Binding and Parameter Specification via the DNS (IETF RFC 9460)…
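As a rough illustration of why the delegation NS set sits awkwardly between the two zones, here is a toy Python model. The zone contents and the helper function are invented for this sketch; real DNSSEC signing is far more involved:

```python
# Toy model of a DNS delegation cut point. Zone contents and the helper
# below are invented for this sketch; real DNSSEC signing is far richer.

# The child zone defines its own NS records (authoritative data).
child_zone = {
    "example.test. NS": ["ns1.example.test.", "ns2.example.test."],
    "example.test. DNSKEY": ["<child key>"],
}

# The parent must carry a copy of those NS records to delegate, but they
# are non-authoritative in the parent, so the parent cannot DNSSEC-sign
# them. The DS record is the only signed parent-side link today.
parent_zone = {
    "test. DNSKEY": ["<parent key>"],
    "example.test. NS": child_zone["example.test. NS"],  # unsigned copy
    "example.test. DS": ["<hash of child key>"],         # signed link
}

def signed_by_parent(rrname: str) -> bool:
    """The parent signs only records it is authoritative for; the NS set
    at the delegation point belongs to the child, so it stays unsigned."""
    return not rrname.endswith(" NS")

print(signed_by_parent("example.test. DS"))  # True
print(signed_by_parent("example.test. NS"))  # False
```

The DS record is today's only parent-signed pointer into the child; a DELEG record would give the parent a signed, parent-side record describing the delegation itself.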
In this episode of PING, Professor Cristel Pelsser, who holds the chair of critical embedded systems at UCLouvain, discusses her work measuring BGP, and in particular the system described in the 2024 SIGCOMM “best paper” award-winning research: “The Next Generation of BGP Data Collection Platforms”. Cristel and her collaborators Thomas Alfroy, Thomas Holterbach, Thomas Krenc and K. C. Claffy have built a system they call GILL, available on the web at https://bgproutes.io This work also features a new service called MVP, to help find the “most valuable vantage point” in the BGP collection system for your particular needs. GILL has been designed for scale, and will be capable of encompassing thousands of peerings. It also has an innovative approach to holding BGP data, focussed on the removal of demonstrably redundant information, and therefore achieving significantly higher compression of the data stream compared to, for example, holding MRT files. The MVP system exploits machine learning methods to aid in selecting the most advantageous data collection point for a researcher’s specific needs. Applying ML methods here permits a significant amount of data to be managed, with changes reflected in the selection of vantage points. Their system has already been able to support DFOH, an approach to finding forged-origin attacks from peering relationships seen online in BGP, as opposed to the peering expected both from location and from declarations of intent inside systems like PeeringDB. Read more about Cristel’s work and their BGP analysis tools on the web: The Next Generation of BGP Data Collection Platforms (Best Paper Award at ACM SIGCOMM 2024) bgproutes.io (web portal to GILL, MVP and DFOH systems) Measuring Internet Routing from the Most Valuable Points A system to Detect Forged-Origin Hijacks (DFOH)…
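The redundancy-removal idea can be sketched in a few lines of Python. This is an assumption-laden simplification, not GILL's actual pipeline: an update is kept only if it changes the last known state for its (peer, prefix) pair.

```python
# Sketch of dropping demonstrably redundant BGP updates (a heavily
# simplified illustration of the idea, not GILL's real code).

def compress_updates(updates):
    """Keep an update only if it changes the last known state for its
    (peer, prefix) pair; duplicate announcements carry no information."""
    state = {}   # (peer, prefix) -> attributes (None means withdrawn)
    kept = []
    for peer, prefix, attrs in updates:
        if state.get((peer, prefix), "missing") != attrs:
            state[(peer, prefix)] = attrs
            kept.append((peer, prefix, attrs))
    return kept

stream = [
    ("peer1", "192.0.2.0/24", "AS_PATH 64500 64496"),
    ("peer1", "192.0.2.0/24", "AS_PATH 64500 64496"),  # duplicate: dropped
    ("peer1", "192.0.2.0/24", None),                   # withdrawal: kept
    ("peer1", "192.0.2.0/24", "AS_PATH 64500 64496"),  # re-announce: kept
]
print(len(compress_updates(stream)))  # 3
```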
In this episode of PING, APNIC’s Chief Scientist, Geoff Huston, discusses the history and emerging future of how Internet protocols get more than the apparent link bandwidth by using multiple links and multiple paths. Initially, the model was quite simple, capable of handling up to four links of equal cost and delay reasonably well, typically to connect two points together. At the time, the Internet was built on telecommunications services originally designed for voice networks, with cabling laid between exchanges, from exchanges to customers, or across continents. This straightforward technique allowed the Internet to expand along available cable or fibre paths between two points. However, as the system became more complex, new path options emerged and bandwidth demands grew beyond the capacity of individual or even equal-cost links, so increasingly sophisticated methods for managing these connections had to be developed. An interesting development at the end of this process is the impact of a fully encrypted transport layer on the intervening infrastructure’s ability to manage traffic distribution across multiple links. With encryption obscuring the contents of the dataflow, traditional methods for intelligently splitting traffic become less effective. Randomly distributing data can often worsen performance, as modern techniques rely on protocols like TCP to sustain high-speed flows by avoiding data misordering and packet loss. This episode of PING explores how Internet protocols boost bandwidth by using multiple links and paths, and how secure transport layers affect this process. Read more about multipath network protocols on the web: IETF Draft on Multipath for QUIC (IETF, April 2025) Multipath TCP: Revolutionising connectivity one path at a time (Cloudflare Blog, January 2025) RFC 8684 (IETF, 2020)…
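The classic hash-based splitting technique can be sketched as follows; the link names and tuple format are invented for illustration. Hashing the flow identifier keeps every packet of one flow on one link, which is why it avoids the reordering that random spraying causes.

```python
# Sketch of hash-based flow placement across equal-cost links (the
# classic ECMP idea; link names and tuple format are invented here).
import zlib

LINKS = ["link0", "link1", "link2", "link3"]

def pick_link(src, dst, sport, dport, proto="udp"):
    """Hash the flow 5-tuple so every packet of one flow takes the same
    link, avoiding reordering within the flow."""
    key = f"{src}|{dst}|{sport}|{dport}|{proto}".encode()
    return LINKS[zlib.crc32(key) % len(LINKS)]

# All packets of one flow land on the same link. With a fully encrypted
# transport, only these outer header fields remain visible to the hash.
flow_a = pick_link("192.0.2.1", "198.51.100.7", 40000, 443)
assert flow_a == pick_link("192.0.2.1", "198.51.100.7", 40000, 443)
print(flow_a in LINKS)  # True
```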

Pulse Internet Measurement Forum at APRICOT 2025: Part 2 (40:59)
Last month, during APRICOT 2025 / APNIC 59, the Internet Society hosted its first Pulse Internet Measurement Forum (PIMF). PIMF brings together people interested in Internet measurement from a wide range of perspectives — from technical details to policy, governance, and social issues. The goal is to create a space for open discussion, uniting both technologists and policy experts. In this second special episode of PING, we continue our break from the usual one-on-one podcast format and present a recap of why the PIMF forum was held, along with the last three short interviews from the workshop. First, we hear a repeat of Amreesh Phokeer's presentation. Amreesh is from the Internet Society and discusses his role in managing the Pulse activity within ISOC. Alongside Robbie Mitchell, Amreesh helped organize the forum, aiming to foster collaboration between measurement experts and policy professionals. Next, we hear from Beau Gieskens, a Senior Software Engineer from APNIC Information Products. Beau has been working on the DASH system and discusses his PIMF presentation on a redesign to an event-sourcing model, which reduced database query load and improved the speed and scaling of the service. We then have Doug Madory from Kentik, who presented to PIMF on a quirk in how Internet Routing Registries (IRRs) are being used, which can cause massive costs in BGP filter configuration and is related to some recent route leaks seen at large in the default-free zone of BGP. Finally, we hear from Lia Hestina from the RIPE NCC Atlas project. Lia is the Community Development Officer for the Atlas project, focussing on the Asia Pacific and Africa. Lia discusses the Atlas system and how it underpins measurements worldwide, including ones discussed in the PIMF meeting. For more insights from PIMF, be sure to check out the PULSE Forum recording on the Internet Society YouTube feed…
In this episode of PING, APNIC’s Chief Scientist, Geoff Huston, discusses the surprisingly vexed question of how to say ‘no’ in the DNS. This conversation follows a presentation by Shumon Huque at the recent DNS OARC meeting; Shumon will be on PING in a future episode talking about another aspect of the DNS protocol. You would hope there is a simple, straightforward answer to this question, but as usual with the DNS, there are more complexities under the surface. The DNS must indicate whether the labels in the requested name do not exist, whether the specific record type is missing, or both. Sometimes it needs to state both pieces of information, while other times it only needs to state one. The problem is made worse by the constraints of signing answers with DNSSEC. There needs to be a way to say ‘no’ authoritatively while minimizing the risk of leaking any other information. NSEC3 records are designed to limit this exposure by making it harder to enumerate an entire zone. Instead of explicitly listing ‘before’ and ‘after’ labels in a signed response denying a label’s existence, NSEC3 uses hashed values to obscure them. In contrast, the simpler NSEC model reveals adjacent labels, allowing an attacker to systematically map out all existing names — a serious risk for domain registries that depend on name confidentiality. This is documented in RFC 7129. Saying ‘no’ with authority also raises the question of where signing occurs — at the zone’s centre (by the zone holder) or at the edge (by the zone server). These approaches lead to different solutions, each with its own costs and consequences. In this episode of PING, Geoff explores the differences between a non-standard, vendor-explored solution and the emergence of a draft standard for how to say ‘no’ properly.…
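The NSEC3 hashing idea (RFC 5155) can be sketched in Python. This is a simplified rendering: real NSEC3 encodes the digest in base32hex and carries the salt and iteration count in the record itself.

```python
# Sketch of NSEC3 name hashing (RFC 5155, simplified): labels are
# hashed (SHA-1 with salt and iterations) so a signed denial can name
# a range of hashes without revealing the neighbouring real labels.
import hashlib

def nsec3_hash(name: str, salt: bytes = b"", iterations: int = 0) -> str:
    # Canonical DNS wire form: lowercase, length-prefixed labels,
    # terminated by the root (zero) label.
    wire = b"".join(bytes([len(l)]) + l.encode()
                    for l in name.lower().split(".") if l) + b"\x00"
    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return digest.hex()   # real records use base32hex, not hex

# A resolver can verify "this hash falls between two signed hashes"
# without learning which real names sit on either side.
print(nsec3_hash("example.org")[:8])
```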
At the APRICOT/APNIC59 meeting held in Petaling Jaya, Malaysia last month, the Internet Society held its first PIMF meeting. PIMF, or the Pulse Internet Measurement Forum, is a gathering of people interested in Internet measurement in the widest possible sense, from technical information all the way to policy, governance and social questions. ISOC is interested in creating a space for this discussion to take place amongst the community, and in bringing both technologists and policy specialists into the same room. This time on PING, instead of the usual one-on-one podcast format, we've got five interviews from this meeting, and after the next episode from Geoff Huston at APNIC Labs we'll play a second part, with three more of the presenters from this session. First up we have Amreesh Phokeer from the Internet Society, who manages the Pulse activity in ISOC and, along with Robbie Mitchell, set up the meeting. Then we hear from Christoph Visser from IIJ Labs in Tokyo, who presented his measurements of the "Steam" game distribution platform used by Valve Software to share games. It's a complex system of application-specific source selection, using multiple Content Distribution Networks (CDNs) to scale across the world, and it allows Christoph to see into link quality from a public API, with no extra measurements required, for an insight into the gamer community and their experience of the Internet. The third interview is with Anand Raje, from AIORI-IMN, India's indigenous Internet measurement system. Anand leads a team which has built out a national measurement system using IoT "orchestration" methods to manage probes and anchors, in a virtual environment which permits them to run multiple independent measurement systems hosted inside their platform. After this there's an interview with Andre Robachevsky from the Global Cyber Alliance (GCA). Andre established the MANRS system and its platform, and nurtured the organisation into being inside ISOC.
MANRS has now moved into the care of GCA and Andre moved with it; he discusses how this complements the existing GCA activities. Finally, we have a conversation with Champika Wijayatunga from ICANN on the KINDNS project. This is a programme designed to bring MANRS-like industry best practice to the DNS community at large, including authoritative DNS delegates, intermediate resolver operators, and the operators of client-supporting stub resolvers. Champika is interested in reaching into the community to get KINDNS more widely understood and to encourage its adoption, with over 2,000 entities having completed the assessment process already. Next time we'll hear from three more participants in the PIMF session: Doug Madory from Kentik, Beau Gieskens from APNIC Information Products, and Lia Hestina from the RIPE NCC. PULSE Forum recording (Internet Society YouTube feed)…
In this episode of PING, APNIC’s Chief Scientist, Geoff Huston, explores BGP "zombies": routes which should have been removed, but are still there. They're the living dead of routes. How does this happen? Back in the early 2000s, Gert Döring in the RIPE NCC region was collating a state-of-BGP-for-IPv6 report, and knew each of the 300 or so IPv6 announcements directly. He understood what should be seen, and what was not being routed. He discovered in this early stage of IPv6 that some routes he knew had been withdrawn in BGP still existed when he looked into the repositories of known routing state. This is some of the first evidence of a failure mode in BGP where the withdrawal of information fails to propagate, and some number of BGP speakers never learn a route has been taken down. They hang on to it. BGP is a protocol which only sends differences to the current routing state as and when they emerge (if you start afresh you get a LOT of differences, because everything must be sent from a ground state of nothing, but after that, you're only told when new things arrive and old things go away). So it can go a long time without saying anything about a particular route: if it's stable and up, there's nothing to say, and once you've passed on a withdrawal, you no longer hold the route, so there's nothing left to tell anyone. If, somewhere in the middle of this conversation, a BGP speaker misses the news that a route is gone, then as long as it doesn't have to tell anyone the route exists, nobody will know it missed the news. In more recent times, there has been a concern this may be caused by a problem in how BGP sits inside TCP messages, and this has even led to an RFC in the IETF process defining a new way to close things out. Geoff isn't convinced this diagnosis is correct, or that the proposed remediation is the right one. Prompted by a recent NANOG presentation, Geoff has been thinking about the problem and what to do, and he has a simpler approach which may work better.
Read more about BGP zombies at the APNIC Blog and the web: BGP Zombies at NANOG 93 (Geoff Huston, APNIC Blog February 2025) NANOG 93 presentation on BGP Zombies (Iliana Xygkou from ThousandEyes, NANOG presentation) RFC 9687 Send Hold Timer (IETF RFC)…
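The failure mode is easy to reproduce in a toy model: because BGP only sends differences, a single lost withdrawal leaves a permanent resident. A sketch, not real BGP:

```python
# Toy replay of the zombie mechanism: BGP sends only differences, so a
# speaker that misses one withdrawal keeps the route forever.

def apply_updates(rib, updates, lose=()):
    """Apply announce/withdraw messages to a RIB, optionally 'losing'
    some messages by index to simulate a propagation failure."""
    for i, (action, prefix) in enumerate(updates):
        if i in lose:          # this message never arrives
            continue
        if action == "announce":
            rib.add(prefix)
        else:
            rib.discard(prefix)
    return rib

updates = [("announce", "192.0.2.0/24"), ("withdraw", "192.0.2.0/24")]
healthy = apply_updates(set(), updates)
zombie = apply_updates(set(), updates, lose={1})  # withdrawal lost
print(healthy)  # set(): route correctly gone
print(zombie)   # {'192.0.2.0/24'}: the living dead
```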
In this episode, Job Snijders discusses RPKIViews, his long-term project to collect the "views" of RPKI state every day, and to maintain an archive of BGP route validation states. The project is named to echo Route Views, the long-standing archive of BGP state maintained by the University of Oregon, which has been discussed on PING. Job is based in the Netherlands, and has worked in BGP routing for large international ISPs and content distribution networks, as well as being a board member of the RIPE NCC. He is known for his work producing the open-source rpki-client RPKI validator, implemented in C and distributed widely through the OpenBSD project. RPKI is the Resource PKI, where "Resource" means the Internet number resources: the IPv4, IPv6 and Autonomous System (AS) numbers which are used to implement routing in the global Internet. The PKI provides cryptographic proofs of delegation of these resources, and allows the delegates to sign over their intentions for originating specific prefixes in BGP, and the relationships between the ASes which speak BGP to each other. Why rpkiviews? Job explains that there's a necessary conversation between the people involved in the operational deployment of secure BGP and the standards development and research community: How many of the world's BGP routes are being protected? How many places are producing Route Origin Attestations (ROAs), the primary cryptographic object used to perform Route Origin Validation (ROV), and how many objects are made? What's the error rate in production, and the rate of growth? A myriad of introspective "meta" questions need to be asked when deploying this kind of system at scale, and one of the best tools to use is an archive of state, updated frequently and, as with Route Views, collected from a diverse range of places worldwide, to understand the dynamics of the system.
Job is using the archive to produce his annual "RPKI Year in Review" report, which was published this year on the APNIC Blog (it's normally posted to operations, research and standards development mailing lists, and presented at conferences and meetings), and its products are being used by the BGPalerter service developed by Massimo Candela. Read about the rpkiviews archive on the APNIC Blog and on the web: RPKI's 2024 Year in Review (Job Snijders, APNIC Blog January 2025) RPKIViews (the RPKI views web archive)…
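Route Origin Validation, the check that ROAs enable, can be sketched with simplified RFC 6811 semantics. The ROA table here is invented for illustration:

```python
# Minimal Route Origin Validation sketch (RFC 6811 semantics,
# simplified): "valid" if some ROA covers the prefix with sufficient
# maxLength and a matching origin AS, "invalid" if covered only by
# non-matching ROAs, "not-found" if no ROA covers it.
import ipaddress

ROAS = [
    # (prefix, maxLength, authorized origin AS) -- illustrative data
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
]

def rov(prefix: str, origin_as: int) -> str:
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_as in ROAS:
        if net.subnet_of(roa_net):
            covered = True
            if net.prefixlen <= max_len and origin_as == roa_as:
                return "valid"
    return "invalid" if covered else "not-found"

print(rov("192.0.2.0/24", 64500))    # valid
print(rov("192.0.2.0/24", 64499))    # invalid: covered, wrong origin
print(rov("198.51.100.0/24", 64500)) # not-found
```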
In his first episode of PING for 2025, APNIC’s Chief Scientist, Geoff Huston, returns to the Domain Name System (DNS) and explores the many faces of the nameservers behind domains. Up at the root (the very top of the namespace, where all top-level domains like .gov or .au or .com are defined to exist) there is a well-established principle of 13 root nameservers. Does this mean only 13 hosts worldwide service this space? Nothing could be farther from the truth! Literally thousands of hosts act as one of those 13 root server labels, in a highly distributed worldwide mesh known as "anycast" which works through BGP routing. The thing is, exactly how the number of nameservers for any given domain is chosen, and how resolvers (the querying side of the DNS, the things which ask questions of authoritative nameservers) decide which one of those servers to use, isn't as well defined as you might think. The packet sizes, the order of data in the packet, and how it's encoded are all very well defined, but "which one should I use from now on, to answer this kind of question" is really not well defined at all. Geoff has been using the Labs measurement system to test behaviour here, and looking at basic numbers for the delegated domains at the root. The number of servers he sees, their diversity, and the nature of their deployment technology in routing are quite variable. But even more interestingly, the diversity of "which one gets used" on the resolver side suggests some very old, out-of-date and over-simplistic methods are still being used almost everywhere to decide what to do. Read more about Geoff's research on DNS nameserver selection and diversity on the APNIC Blog: DNS nameservers: Service performance and resilience (Geoff Huston, APNIC Blog February 2025)…
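One widely used (and arguably over-simplistic) selection strategy is to track a smoothed RTT per nameserver and always query the apparent fastest. This sketch shows the assumed general pattern for illustration; it is not any specific resolver's code:

```python
# Sketch of smoothed-RTT nameserver selection (an assumed, generic
# strategy for illustration, not a particular resolver's algorithm).

def record_rtt(srtt, server, rtt_ms, alpha=0.3):
    """Exponentially smooth the measured RTT for one server."""
    srtt[server] = (1 - alpha) * srtt.get(server, rtt_ms) + alpha * rtt_ms

def pick_server(srtt):
    """Query whichever server currently looks fastest."""
    return min(srtt, key=srtt.get)

srtt = {}
record_rtt(srtt, "ns1.example.net", 12.0)
record_rtt(srtt, "ns2.example.net", 80.0)
print(pick_server(srtt))  # ns1.example.net
```

Real resolvers also need to decay or occasionally re-probe the "slow" servers, otherwise a server that was briefly slow once would never be tried again.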
Welcome back to PING, at the start of 2025. In this episode, Gautam Akiwate (now with Apple, but at the time of recording with Stanford University) talks about the 2021 Applied Networking Research Prize (ANRP) winning paper, co-authored with Stefan Savage, Geoffrey Voelker and Kimberly Claffy, titled "Risky BIZness: Risks Derived from Registrar Name Management". The paper explores a situation which emerged inside the supply chain behind DNS name delegation, in the use of an IETF protocol called the Extensible Provisioning Protocol (EPP). EPP is an XML-based protocol, and is how registry-registrar communications take place on behalf of a given domain name holder (the delegate) to record which DNS nameservers have the authority to publish the delegated zone. The problem doesn't lie in the DNS itself, but in the operational practices which emerged in some registrars to remove dangling dependencies in their systems when domain names were de-registered. In effect, they used an EPP feature to rename the dependency, so they could move on with selling the domain name to somebody else. The problem is that this feature created valid names, which could themselves then be purchased. For some number of DNS consumers, those new valid nameservers would then be permitted to serve the domain, enabling attacks on the integrity of the DNS and the web. Gautam and his co-authors explored a very interesting quirk of the back-end systems, and in the process helped improve the security of the DNS and identified weaknesses in a long-standing "daily dump" process that provides audit and historical data. Read more about Risky BIZness and the supply-chain attack on the web: The 2021 ANRP paper "Risky BIZness: Risks Derived from Registrar Name Management" 2017 Grand Jury indictment of Zhang et al 2022 IMC paper "Retroactive Identification of Targeted DNS Infrastructure Hijacking" The prevalence, persistence, and perils of lame delegations (APNIC blog, 2021)…
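The hazard class the paper describes can be illustrated with a toy check: a zone is hijackable when one of its nameservers lives under a base domain that is no longer registered, because anyone can then buy that base domain and answer for the zone. All the names and the registry snapshot below are hypothetical; real studies work from registry zone-file snapshots.

```python
# Toy detector for the "hijackable via purchasable nameserver" hazard.
# All names and the registry snapshot are hypothetical illustrations.

REGISTERED = {"example.com", "good-dns.net"}   # stand-in registry snapshot

def base_domain(host: str) -> str:
    """Crude registrable-domain guess: the last two labels. Real
    tooling would consult the Public Suffix List instead."""
    return ".".join(host.rstrip(".").split(".")[-2:])

def hijackable_ns(ns_set):
    """Nameservers whose base domain is unregistered: anyone could
    register it and start serving answers for the delegated zone."""
    return {ns for ns in ns_set if base_domain(ns) not in REGISTERED}

ns_set = {"ns1.good-dns.net", "ns2.sacrificial-rename.biz"}
print(hijackable_ns(ns_set))  # {'ns2.sacrificial-rename.biz'}
```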
In the last episode of PING for 2024, APNIC’s Chief Scientist Geoff Huston discusses the shift from existing public-private key cryptography using the RSA and ECC algorithms to the world of ‘post-quantum cryptography’. These new algorithms are designed to withstand potential attacks from large-scale quantum computers capable of implementing Shor’s algorithm, a theoretical approach for using quantum computing to break the cryptographic keys of RSA and ECC. Standards agencies like NIST are pushing to develop algorithms that are both efficient on modern hardware and resistant to the potential threats posed by Shor’s algorithm in future quantum computers. This urgency stems from the need to ensure ‘perfect forward secrecy’ for sensitive data — meaning that information encrypted today remains secure and undecipherable even decades into the future. To date, maintaining security has been achieved by increasing the recommended key length as computing power improved under Moore’s Law, with faster processors and greater parallelism. However, quantum computing operates differently and will be capable of breaking the encryption of current public-private key methods, regardless of the key length. Public-private keys are not used to encrypt entire messages or datasets. Instead, they encrypt a temporary ‘ephemeral’ key, which is then used by a symmetric algorithm to secure the data. Symmetric key algorithms (where the same key is used for encryption and decryption) are not vulnerable to Shor’s algorithm. However, if the symmetric key is exchanged using RSA or ECC — common in protocols like TLS and QUIC when parties lack a pre-established way to share keys — quantum computing could render the protection ineffective. A quantum computer could intercept and decrypt the symmetric key, compromising the entire communication.
Geoff raises concerns that while post-quantum cryptography is essential for managing risks in many online activities — especially for protecting highly sensitive or secret data—it might be misapplied to DNSSEC. In DNSSEC, public-private keys are not used to protect secrets but to ensure the accuracy of DNS data in real-time. If there’s no need to worry about someone decoding these keys 20 years from now, why invest significant effort in adapting DNSSEC for a post-quantum world? Instead, he questions whether simply using longer RSA or ECC keys and rotating key pairs more frequently might be a more practical approach. Read more about Post-Quantum Cryptography and DNSSEC on the APNIC blog and the web. Post-Quantum Cryptography (Geoff Huston, APNIC Blog November 2024) [Podcast] Testing Post-Quantum Cryptography DNSSEC (Podcast July 2024) A quantum-safe cryptography DNSSEC testbed ( Caspar Schutijser , APNIC Blog 2024) [Podcast] The SIDN Labs post-quantum DNSSEC testbed (Podcast August 2024) Quantum Computing and the DNS (Paul Hoffman, office of the CTO, ICANN April 2024) PING will return in early 2025 This is the last episode of PING for 2024, we hope you’ve enjoyed listening. The first episode of our new series is expected in late January 2025. In the meantime, catch up on all past episodes .…
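Where Shor's algorithm bites can be shown with a deliberately toy hybrid scheme: the public-key step protects only the ephemeral symmetric key, so breaking it unravels the whole session even though the symmetric layer itself is not quantum-weak. The RSA numbers are tiny teaching values; nothing here is real cryptography.

```python
# Toy hybrid encryption showing where a quantum attack lands: the
# public-key wrap of the ephemeral key. Tiny teaching numbers only.
import hashlib
import os

# Toy RSA: n = 3233 (61 * 53), e = 17, d = 2753.
n, e, d = 3233, 17, 2753

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256-derived keystream."""
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

ephemeral = os.urandom(1)                 # tiny ephemeral symmetric key
wrapped = pow(ephemeral[0], e, n)         # "RSA"-protected key exchange
ciphertext = keystream_xor(ephemeral, b"the actual data")

# A quantum attacker factors n, recovers d, unwraps the key, and the
# symmetric layer (not itself quantum-weak) falls with it:
recovered = bytes([pow(wrapped, d, n)])
print(keystream_xor(recovered, ciphertext))  # b'the actual data'
```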

Measuring DNSSEC keying "drift" between parent and child (36:26)
This time on PING, Peter Thomassen from SSE and deSEC.io discusses his analysis of the failure modes of CDS and CDNSKEY records between parent and child in the DNS. These records provide in-band signalling of the DS record, which is fundamental to maintaining a secure path from the trust anchor to the delegation through all the intermediate parent and grandparent domains. Many people use out-of-band methods to update this DS information, but the CDS and CDNSKEY records are designed to signal this critical information inside the DNS, avoiding many of the pitfalls of passing through a registry-registrar web service. The problem, as Peter has discovered, is that the information across the various nameservers (denoted by the NS records in the DNS) of the child domain can fall out of alignment, and the tests a parent zone needs to perform when checking CDS and CDNSKEY information aren't sufficiently specified to wire down this risk. Peter performed a meta-analysis inside a far larger cohort of DNS data captured by Florian Steurer and Tobias Fiebig at the Max Planck Institute, and discovered a low but persistent error rate: a drift in the critical keying information between a zone's nameservers and the parent. Some of these errors related to transitional states in the DNS (such as when you move registry or DNS provider), but by no means all, and this has motivated Peter and his co-authors to look at improved recommendations for managing CDS/CDNSKEY data, to minimise the risk of inconsistency and the consequent loss of a secure entry path to a domain name. Read more about DNSSEC delegation at the APNIC Blog, and the IETF: Authenticated bootstrapping of DNSSEC delegations (Nils Wisiol, APNIC Blog March 2022) Measurement of CDS/CDNSKEY inconsistencies (IETF 119 Presentation, March 2024) Generalised DNS NOTIFY (IETF Draft)…
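The consistency requirement can be sketched simply: a parent should accept a CDS set only when every child nameserver serves the same one. The record strings and nameserver names below are placeholders; current drafts also require the answers to be DNSSEC-signed.

```python
# Sketch of a parent-side CDS consistency check: act on the signalled
# DS change only if every child nameserver agrees. Placeholder data.

def cds_consistent(answers):
    """answers maps nameserver -> frozenset of CDS records it served.
    Consistent means every nameserver served the identical set."""
    return len(set(answers.values())) == 1

in_sync = {"ns1": frozenset({"12345 13 2 ab..."}),
           "ns2": frozenset({"12345 13 2 ab..."})}
drifted = {"ns1": frozenset({"12345 13 2 ab..."}),
           "ns2": frozenset({"99999 13 2 cd..."})}   # stale server

print(cds_consistent(in_sync))  # True: safe to update the DS
print(cds_consistent(drifted))  # False: parent should not act
```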
In his regular monthly spot on PING, APNIC’s Chief Scientist Geoff Huston discusses the slowdown in worldwide IPv6 uptake. Within the Asia-Pacific footprint we have some truly remarkable national statistics, such as India, which is now over 80% IPv6-enabled by APNIC Labs measurements, and Vietnam, which is not far behind at 70%. The problem is that worldwide, adjusted for population and considering levels of Internet penetration in the developed economies, the pace of uptake overall has not improved, and has been essentially linear since 2016. In some economies, like the US, a natural peak of around 50% capability was reached in 2017 and uptake has been essentially flat since then: there is no sign of closure to a global deployment in the US and many other economies. Geoff takes a high-level view of the logistic supply curve, with its early adopters, early and late majority, and laggards, and sees no clear signal of a visible endpoint where the transition to IPv6 will be "done". Instead, we're facing continual dual-stack operation of both IPv4 (increasingly behind Carrier-Grade NATs (CGNs) deployed inside the ISP) and IPv6. There are success stories in mobile (such as seen in India) and in broadband with central management of the customer router. But it seems that with the shift in the criticality of routing and numbering to a more name-based steering mechanism, and the continued rise of content distribution networks, the pace of IPv6 uptake worldwide has not followed the pattern we had planned for. Read more about the IPv6 transition at the APNIC Blog: The IPv6 Transition (Geoff Huston, APNIC Blog November 2024) The transition to IPv6: Are we there yet? (Geoff Huston, APNIC Blog May 2022)…
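The logistic supply curve Geoff refers to has the form uptake(t) = L / (1 + e^(-k(t - t0))). With illustrative, invented parameters, the curve's growth visibly accelerates and then slows as the late majority arrives; a trend that has stayed linear since 2016 is not tracing such a curve toward completion.

```python
# Logistic adoption curve sketch. Parameters (L, k, t0) are invented
# for illustration, not fitted to the APNIC Labs measurement data.
import math

def logistic(t, L=1.0, k=0.6, t0=2020):
    """uptake(t) = L / (1 + e^(-k (t - t0)))."""
    return L / (1 + math.exp(-k * (t - t0)))

# Early on uptake is small; near saturation it flattens toward L:
print(round(logistic(2016), 2))  # 0.08
print(round(logistic(2024), 2))  # 0.92
```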
In this episode of PING, Vanessa Fernandez and Kavya Bhat, two students from the National Institute of Technology Karnataka (NITK), discuss the student-led, multi-year project to deploy IPv6 at their campus. Kavya and Vanessa have just graduated, and are moving into their next stages of work and study in computer science and network engineering. Across 2023 and 2024 they were able to attend IETF 118 and IETF 119 and present their project and its experiences to the IPv6 working groups and off-working-group meetings, in part funded by the APNIC ISIF project and the APNIC Foundation. This multi-year project is supervised by the NITK Centre for Open-source Software and Hardware (COSH) and has outside review from Dhruv Dhody (ISOC) and Nalini Elkins (Inside Products Inc.). Former students have also acted as alumni and remain involved in the project as it progresses. We often focus on IPv6 deployment at scale in the telco sector, or experiences with small deployments in labs, but another side of the IPv6 experience is the large campus network: equivalent in scale to a significant factory or government department deployment, but in this case undertaken by volunteer staff with little or no prior experience of networking technology. Vanessa and Kavya talk about their time on the project, and what they got to present at IETF. Read more about NITK and their IPv6 deployment project on the APNIC Blog, the IETF website and the APNIC Foundation pages: Migrating the NITK Surathkal Campus Network to IPv6 (APNIC Foundation) How Deploying IPv6 at NITK Led me to IETF (Vanessa Fernandez, APNIC Blog) IPv6 Deployment at NITK (IETF 118 Presentation)…

The back of the class: looking at 240/4 reachability (1:09:20)
In his regular monthly spot on PING, APNIC’s Chief Scientist, Geoff Huston, discusses a large pool of IPv4 addresses left in the IANA registry from the classful allocation days of the mid 1980s. This block, from 240.0.0.0 to 255.255.255.255, encompasses 268 million hosts, which is a significant chunk of address space: it's equivalent to 16 class-A blocks, each of 16 million hosts. It seems a shame to waste it, so how about we get it back into use? Back in 2007, Geoff, Paul, and I submitted an IETF draft which would have removed these addresses from their "reserved" status in IANA and used them to supplement the RFC 1918 private-use blocks. We felt at the time this was the best use of these addresses because of their apparent un-routability in the global Internet. Almost all IP network stacks at that time shared a lineage with the BSD network code developed at the University of California and released in 1983 as BSD 4.2. Subsequent versions of this codebase included a two- or three-line rule inside the kernel which checked the top 4 bits of the 32-bit address field, and refused to forward packets which had all four bits set. This reflected the IANA status marking the range as reserved. The draft did not achieve consensus. A more recent proposal, from Seth Schoen, David Täht and John Gilmore in 2021, continues to be worked on, but rather than assigning the range to RFC 1918-style internal non-routable use, it puts the addresses into global unicast use. The authors believe that the critical filter has now been lifted from devices, and no longer persists at large in the BSD- and Linux-derived codebases. This echoes use of the address space which has been noted inside the datacentre. Geoff has been measuring reachability at large to this address space, using the APNIC Labs measurement system and a prefix in 240.0.0.0/4 temporarily assigned and routed in BGP. The results were not encouraging, and Geoff thinks achieving routability of the range remains a very high burden.
Read more about 240/4 on the APNIC Blog and in the IETF Datatracker: Looking for 240/4 addresses (Geoff Huston, APNIC Blog September 2024) Re-delegation of 240/4 from "future use" to "private use" (expired IETF draft, 2008) Unicast use of the formerly reserved 240/4 (active IETF draft, 2024)…
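The kernel filter the episode describes came down to checking whether the top 4 bits of a 32-bit IPv4 address were all set. A minimal Python sketch of that test (illustrative only, not the original BSD C code, which was a couple of lines in the forwarding path):

```python
import ipaddress

def is_class_e(addr: str) -> bool:
    """Return True if addr falls in the reserved 240.0.0.0/4 ("Class E") block.

    The classic BSD-derived filter refused to forward any packet whose
    destination had its top 4 address bits set, i.e. a first octet >= 240.
    """
    top_nibble = int(ipaddress.IPv4Address(addr)) >> 28
    return top_nibble == 0xF

print(is_class_e("240.0.0.1"))    # True
print(is_class_e("203.0.113.7"))  # False
```

Hosts that still carry this check will never route a 240/4 prefix, which is why the measured reachability Geoff reports remains poor.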
DELEG - a proposed new way to manage DNS Delegation in-band (1:01:40)
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses a proposed new DNS resource record called DELEG. The record is being designed to aid in managing where a DNS zone is delegated. Delegation is the primary mechanism used in the DNS to separate responsibility between child and parent for a given domain name. The DELEG RR is designed to address several problems, including the goal of moving to new transports for the name resolution service the DNS provides to all other Internet protocols. Additionally, Geoff believes it can help with the cost and management issues inherent in out-of-band external domain name management through the registry/registrar process, bound up in the whois system and in a protocol called the Extensible Provisioning Protocol (EPP). There are big costs here, including some problems dealing with intermediaries who manage your DNS on your behalf. Unlike whois, EPP, and registrar functions, DELEG would be an in-band mechanism between the parent zone, any associated registry, and the delegated child zone. It's a classic disintermediation story about improved efficiency, and it enables the domain name holder to nominate intermediaries for their services via an aliasing mechanism that has until now eluded the DNS. Read more about DELEG on the APNIC Blog and on the IETF website: DNS and the proposed DELEG record (APNIC Blog) 'Extensible Delegation for DNS' (IETF draft) Extensible Provisioning Protocol (EPP) (IETF RFC)…
This time on PING we have Amreesh Phokeer from the Internet Society (ISOC) talking about a system they operate called Pulse, available at https://pulse.internetsociety.org/ . Pulse's purpose is to assess the "resiliency" of the Internet in a given locality. Similar systems we have discussed before on PING include APNIC's DASH service, aimed at resource-holding APNIC members, and the MANRS project. Both of these take underlying statistics, such as resource distribution data or measurements of RPKI uptake and BGP behaviours, and present them to the community; in the case of MANRS there's a formalised "score" which shows your ranking against current best practices. The Pulse system measures resilience across four pillars: Infrastructure, Quality, Security and Market Readiness. Some of these are "hard" measures analogous to MANRS and DASH, but in addition Pulse includes "soft" indicators such as the economic impacts of design decisions in an economy of interest, the extent of competition, and less formally defined attributes like the amount of resiliency behind BGP transit. This allows the ISOC Pulse system to consider governance-related aspects of the development of the Internet, and it has a simple scoring model which yields a single health metric, analogous to a physician using pulse and blood pressure to assess your condition, but this time applied to the Internet. Read more about Pulse: The https://pulse.internetsociety.org/ website The Pulse Blog Don't put all your internet infrastructure in one basket (Robbie Mitchell in the APNIC Blog) Internet Resilience on Pulse Internet Resilience Index Methodology…
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses the role of the DNS in directing where your applications connect to, and where content comes from. Although this is more a matter of "steering" traffic than "routing" it in the strict sense of IP packet forwarding (that's still the function of the Border Gateway Protocol, or BGP), it does in fact represent a kind of routing decision: selecting the content source or server logistically "best" or "closest" to you. So, in the spirit of "Orange is the new Black", DNS is the new BGP. As this change in the delivery of content has emerged, effective control of this kind of routing decision has also become more concentrated, into the hands of a small number of at-scale Content Distribution Networks (CDNs) and associated DNS providers worldwide. This is far fewer actors than the 80,000 or so BGP speakers with their own AS, and represents another trend to be thought about. How we optimise content delivery isn't decided in common amongst us; it's managed by simpler contractual relationships between content owners and intermediaries. The upside, of course, remains the improvement in efficiency of fetch for each client, and the reduction in delay and loss. But the evolution of the Internet over time, and the implications for governance of these "steering" decisions, is going to be of increasing concern. Read more about Geoff's views on concentration in the Internet, governance, and economics on the APNIC Blog and at APNIC Labs: DNS is the new BGP Internet Governance in 2023 On Internet Centrality and Fragmentation The Internet as a Public Utility An Economic Perspective on Internet Centrality Looking at Centrality in the DNS…
In this episode of PING, Leslie Daigle from the Global Cyber Alliance (GCA) discusses their honeynet project, measuring bad traffic Internet-wide. This was originally focussed on IoT devices under the AIDE project but is clearly more generally informative. Leslie also discusses the Quad9 DNS service, GCA's domain trust work, and the MANRS project. Launched in 2014 with support from ISOC, MANRS now has a continuing relationship with GCA and may represent a model for the routing community regarding the 'bad traffic' problem which the AIDE project explores. Leslie has a long history of work in the public interest, as Chief Internet Technology Officer of the Internet Society and with the IETF. She is currently chair of the MOPS working group, has co-authored 22 RFCs, and was chair of the IAB for five years. Read more about GCA, AIDE, domain trust and honeynets: The Global Cyber Alliance (GCA) The AIDE programme at GCA Domain Trust at GCA Honeynet-tagged blog entries at APNIC…
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses the change in IP packet fragmentation behaviour adopted by IPv6, and the implications of a change in IETF "normative language" regarding the use of IPv6 in the DNS. IPv4 arguably succeeds over so many variant underlying links and networks because it is highly adaptable to fragmentation in the path. IPv6 has a prescriptive requirement that only the end hosts fragment, which limits how intermediate systems can handle IPv6 data in flight. In the DNS, increasing complexity from things like DNSSEC means that DNS packet sizes are getting larger and larger, which risks invoking the IPv6 fragmentation behaviour in UDP. This has consequences for the reliability and timeliness of the DNS service. For this reason, a revision of the IETF normative language (the use of the capitalised MUST, MAY, SHOULD and MUST NOT) directing how IPv6 integrates into the DNS service in deployment has risks. Geoff argues for a "first, do no harm" approach to this kind of IETF document. Read more about IPv6, fragmentation, the DNS and Geoff's measurements on the APNIC Blog and APNIC Labs: IPv6, the DNS and Happy Eyeballs How we measure DNSSEC Validation DNS is the new BGP To DNSSEC or Not…
In this episode of PING, Sara Dickinson from Sinodun Internet Technologies and Terry Manderson, VP, Information Security and Network Engineering at ICANN, discuss the ICANN DNS stats collector system, which ICANN commissioned and Sinodun wrote for them. The system consists of two parts: a DNS stats compactor framework, which captures data in the C-DNS format (a specified set of data in CBOR format), and a DNS stats visualiser, which uses Grafana. The C-DNS format is not a complete packet capture, but it allows the recreation of all the DNS context of the query and response. It was standardised in 2019, in an RFC authored by Sara, her partner John, Jim Hague, John Bond and Terry. Unlike DSC, which is a 5-minute sample aggregation system, this system preserves a significantly larger amount of the observed DNS query information, and can even be used to re-create an on-the-wire view of the DNS (albeit not 1-to-1 identical to the original IP packet flows). Read more about the systems, and IMRS, online: RFC8618 Compacted-DNS (C-DNS): A Format for DNS Packet Capture The ICANN github repository for DNS Stats ICANN Managed Root Server (IMRS)…
Low Earth Orbit and the TCP congestion control problem (1:16:40)
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses the rise of Low Earth Orbit (LEO) satellite-based Internet, and the consequences for end-to-end congestion control in TCP and related protocols. Modern TCP has mostly been tuned for constant-delay, low-loss paths, and performs very well at balancing bandwidth amongst the cooperating users of such a link, achieving maximum use of the resource. But a consequence of the new LEO Internet is a high degree of variability in delay and loss, and consequently unstable bandwidth, which means TCP congestion control methods aren't working quite as well in this kind of Internet. A problem is that with the emergence of TCP bandwidth estimation models such as BBR, and the rise of new transports like QUIC (which continue to use the classic TCP model for congestion control), we have a fundamental mismatch in how competing flows try to share the link. Geoff has been exploring this space with some tests from Starlink home routers, and models of satellite visibility. His Labs Starlink page shows a visualisation of the behaviour of the Starlink system, and a movie of views of the satellites in orbit. Read more about TCP, QUIC, LEO and Geoff's measurements on the APNIC Blog and APNIC Labs: APNIC Labs measurements of Starlink (2023, Geoff Huston) Comparing TCP and QUIC (November 2022, Geoff Huston) Testing LEO and GEO Satellite Services in Australia (May 2022, Geoff Huston) Transport Protocols and the Network (May 2021, Geoff Huston) Congestion Control at IETF110 (March 2021, Geoff Huston)…
In this episode of PING, Verisign fellow Duane Wessels discusses a late-stage (version 08) Internet draft he's working on with two colleagues from Verisign. The draft is on Negative Caching of DNS Resolution Failures and is co-authored by Duane, William Carroll, and Matt Thomas. This episode discusses the behaviour of the DNS system overall in the face of failures to answer. There are already mechanisms to deny the existence of a queried name or a specific resource type, and there are mechanisms to define how long this negative answer should be cached, just as there are cache lifetimes defined for how long to hold valid answers: things that do exist and have been supplied. This time, it's a cache of not being able to answer. The thing asked about might exist, or it might not. This cached data isn't saying whether it exists; it's caching the failure to be able to answer. As the draft states: "… a non-response due to a resolution failure in which the resolver does not receive any useful information regarding the data's existence." Prior DNS specifications provided guidance on caching in the context of positive and negative responses, but the only guidance relating to failing to answer was to avoid aggressive re-querying of the nameservers that should be able to answer. Read more about the draft, and other DNS-related work by Duane, on the APNIC Blog: The draft Negative Caching of DNS Resolution Failures (2023, Version 08) Adding ZONEMD protections to the root zone (2023, APNIC Blog post) [Podcast] Adding ZONEMD protections to the root zone (2023, related podcast on PING) [Podcast] A look back at notable root zone changes (Duane discusses three significant root zone changes over the last decade)…
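The idea of caching a resolution failure, as distinct from a signed denial of existence, can be sketched as a small cache keyed on the question, with its own short lifetime. This toy Python sketch is illustrative only; the `FailureCache` class and the 30-second TTL are invented for the example, not taken from the draft:

```python
import time

FAILURE_TTL = 30  # seconds; an assumed short lifetime for a cached failure

class FailureCache:
    """Remember recent resolution failures so we don't hammer broken servers."""

    def __init__(self):
        self._failures = {}  # (qname, qtype) -> expiry timestamp

    def record_failure(self, qname, qtype):
        self._failures[(qname, qtype)] = time.time() + FAILURE_TTL

    def recently_failed(self, qname, qtype):
        expiry = self._failures.get((qname, qtype))
        if expiry is None:
            return False
        if time.time() >= expiry:
            del self._failures[(qname, qtype)]  # entry aged out, retry allowed
            return False
        return True

cache = FailureCache()
cache.record_failure("broken.example", "A")
print(cache.recently_failed("broken.example", "A"))  # True
print(cache.recently_failed("ok.example", "A"))      # False
```

A resolver consulting such a cache before re-querying gives the "avoid aggressive re-querying" behaviour the earlier specifications asked for, with an explicit, bounded lifetime.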
In this episode of PING, instead of a conversation with APNIC's Chief Scientist Geoff Huston, we've got a panel session from APNIC 56 that he facilitated, where Geoff and six guests discussed the 30-year history of APNIC. With Geoff on the panel were: Professor Jun Murai, known as the 'father of the Internet' in Japan. In 1984, he developed the Japan University UNIX Network (JUNET), the first-ever inter-university network in that nation. In 1988, he founded the Widely Integrated Distributed Environment (WIDE) Project, a Japanese Internet research consortium, for which he continues to serve as a board member. Along with Geoff, Jun was one of the main progenitors of what became APNIC. Elise Gerich, a 31-year veteran of Internet networking, is recognised globally for her significant contributions to the Internet. Before retiring, Elise was President of PTI and, prior to that, Vice President of IANA at ICANN. Elise served as Associate Director, National Networking at Merit Network in Michigan. While at Merit she was also a Principal Investigator for NSFNET's T3 Backbone Project and the Routing Arbiter Project, and was responsible for much of the early address management impetus which led to the creation of the RIR system. David Conrad, previously the Chief Technology Officer of ICANN, was involved in the creation of APNIC as its first full-time employee and founding Director-General. Akinori Maemura, the JPNIC Chief Policy Officer, was a member of the APNIC EC for 16 years, 13 of them as Chair of the EC. Gaurab Raj Upadhaya, Head of WWW Video Delivery Strategy, Prime Video at Amazon, has been active in the Internet community for more than a decade and, like Akinori, served on the APNIC EC for 12 years, 7 of these as Chair of the EC. Paul Wilson has more than thirty years' involvement with the Internet, including 25 years' experience as Director General of APNIC.
The Panel discussed the early years of the Internet and the processes which led to the creation of APNIC along with some significant moments in the life of the registry.…
In this episode of PING, Stephen Song discusses his work mapping the Internet. This is a long-term project which he carries out alongside, and supported by, the Mozilla Corporation and the Association for Progressive Communications (APC). Stephen has long championed the case for open data in telecommunications decision-making, and maintains a list of resources for capacity building and development of the Internet, with a particular focus on Africa. The combination of opaque business practices and the change from end delivery to mediated proxies in the content distribution network model raises questions about where the things users engage with and depend on actually are, which matters if network infrastructure is to be efficiently and openly planned. This episode of PING explores the issues inherent in understanding 'where things are' in the modern Internet. Explore Stephen's resources: Many Possibilities website Connectivity indexes, maps, and reports (GitHub) Open Data map of Content Distribution Networks around the world After Fibre Village Telco…
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses the technique APNIC Labs uses to measure end-user behaviour in the global Internet. This is probably the only web-advert-based measurement system in continuous use since 2010. Originally written in Adobe Flash, the system is now coded in JavaScript and HTML5, and continuously samples as many as 25 million users per day, across mobile devices and desktop PCs, Android, iPhone and Chromebook. The system was first designed to inform the community on the rate of IPv6 deployment. The APNIC Labs measurements now encompass IPv6, RTT, HTTP/3 (QUIC) adoption, DNSSEC, use of public DNS resolvers, IPv6 EH support, and RPKI validation, amongst other measurements. Data is available at a per-economy and per-AS (origin-AS) level, both as a web view and as JSON downloads. No end-user identifying material is held or distributed in any way. The measurement program is generously supported by Google, ICANN and APNIC. Read more about some recent research outcomes from the Labs advert on the APNIC Blog: Measuring the use of DNSSEC (September 2023, Geoff Huston) Measuring NXDOMAIN responses (July 2023, Geoff Huston) A Further Update on IPv6 Extension Headers (June 2023, Geoff Huston) A second look at QUIC use (September 2022, Geoff Huston)…
In June of this year, the Dashboard for AS Health (DASH), a service operated by APNIC, saw a leak of approximately 260,000 BGP routes from a vantage point in Singapore, and sent alerts to around 90 subscribers to our routing mis-alignment notification service, which is part of DASH. BGP is the state of announcements made and heard worldwide, calculated by every BGP speaker for themselves; although it's globally connected and represents "the same" network, not everyone sees all things, as a result of filtering and configuration differences around the globe. BGP also should align with two external information systems: the older Internet Routing Registry (IRR) system, which uses a notation called RPSL to represent routing policy data, including the "route" object, and the Resource Public Key Infrastructure (RPKI), which represents the origin-AS (in BGP, who originates a given prefix) in a cryptographically signed object called a ROA. The BGP prefix and origin (the route) should align with what's in an IRR route object and an RPKI ROA, but sometimes these disagree. That's what DASH is designed to do: tell you when these three information sources fall out of alignment. I discussed this incident, and the APNIC Information Product family (DASH, a collaboration with RIPE NCC called NetOX, and the delegation statistics portal called REX), with Rafael Cintra, the product manager of these systems, and with Dave Phelan, who works in the APNIC Academy and has a background in network routing operations.
You can find the APNIC Information products here: (note that the DASH service needs a MyAPNIC login to be used) https://dash.apnic.net the DASH portal login page (MyAPNIC resource login needed) https://netox.apnic.net NetOX the Network Observatory web service https://rex.apnic.net Resource Explorer: delegation statistics for the world And you can read about the Information Products family in these blog articles: New Alert Options for DASH Routing Status added to DASH Suspicious Traffic Alerts added to DASH Using DASH to rank economies by suspicious traffic How DASH helps monitor Network Health Worldwide REX Introducing REX a new approach for the internet directory Hands-On with APNIC’s NetOX…
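The route-versus-ROA part of the alignment check described above follows the origin validation logic of RFC 6811: a route is "valid" if some covering ROA matches its origin AS and prefix length, "invalid" if it is covered but nothing matches, and "not-found" otherwise. A simplified Python sketch (the ROA tuples and AS numbers here are invented for illustration):

```python
import ipaddress

def rov_state(prefix: str, origin_as: int, roas) -> str:
    """Classify a BGP route against a list of ROAs, RFC 6811 style.

    roas: iterable of (roa_prefix, max_length, asn) tuples.
    """
    route = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_length, asn in roas:
        roa = ipaddress.ip_network(roa_prefix)
        if route.subnet_of(roa):
            covered = True  # some ROA speaks for this address space
            if asn == origin_as and route.prefixlen <= max_length:
                return "valid"
    return "invalid" if covered else "not-found"

roas = [("192.0.2.0/24", 24, 64500)]
print(rov_state("192.0.2.0/24", 64500, roas))     # valid
print(rov_state("192.0.2.0/24", 64501, roas))     # invalid: wrong origin AS
print(rov_state("198.51.100.0/24", 64500, roas))  # not-found: no covering ROA
```

DASH effectively runs this comparison (and the analogous one against IRR route objects) continuously for your announced prefixes, and alerts when the answers disagree.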
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses the coming future of VLSI as Moore's Law comes to an end. This was motivated by a key presentation made at the most recent ANRW session, at IETF 117 in San Francisco. For over five decades we have been able to rely on an annual, latterly biennial, doubling of speed and halving of the size of the technology inside a microchip, known as Moore's Law. Very Large Scale Integration (VLSI) rests on the basic building block of the modern age: the transistor. From its beginnings off the back of the diode, replacing valves but still a discrete component, to the modern reality of trillions of logic "gates" on a single chip, everything we have built in recent times which includes a computer has been built under the model "it can only get cheaper next time round". But for various reasons explored in this episode, that isn't true any more, and won't be true into the future. We're going to have to get used to the idea that it isn't always faster, smaller, cheaper, and this will have an impact on how we design networks, including details inside the protocol stack which go to the processing complexity of forwarding packets along the path. A few times, both Geoff and I get our prefixes mixed up, and may say millimetres for nanometres, or even worse, on air. We also confused the order of letters in the company acronym TSMC, the Taiwan Semiconductor Manufacturing Company. Read more about the end of Moore's Law on the APNIC Blog and at the IETF: Chipping Away at Moore's Law (August 2023, Geoff Huston) It's the End of DRAM As We Know It (July 2023, Philip Levis, IETF117 ANRW session)…
Here comes the sun(spots) — what are the real risks in solar storms? (45:21)
In this episode of PING, Jaap Akkerhuis (NLNet Labs), Ulrich Spiedel (University of Auckland) and Russ White (Juniper) discuss the issues behind sunspots, ionisation in the atmosphere, and its effects on satellite communications and terrestrial infrastructure based on wires in the air: power grids and data services. In two blogs, Good day sunshine and Solar storms and the Internet, we've highlighted the potential risks from increases in solar activity, such as solar flares and the associated Coronal Mass Ejections (CMEs). Spectacular as the effects on Earth's atmosphere can be, the risk from these events is quite high if things line up badly for us: there can be compounding effects on satellite systems' orbits, their electrical components, and their lifetime in orbit (due to the cost of burning fuel to reposition during the event), as well as effects on land, as the suspended wires in power grids and data communications act as antennas and deliver voltage "spikes" to attached equipment at the end, as well as along the path. However, as explored in this episode of PING, the situation is often overblown by the news cycle; it's more a story about being prepared, with resilience in systems exposed to risk, and about understanding those risks. Read more about solar storms and their impact on infrastructure, satellite communications and space weather: Good day, sunshine (George Michaelson, May 2023) Solar storms and the Internet (Ulrich Spiedel, July 2021) APNIC Blog articles about Satellite Communications The Space Weather website (as mentioned by Jaap in the podcast)…
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses the eternal tension between content and carriage. At the RIPE 86 meeting held in Rotterdam in May of this year, Rudolf van der Berg presented a talk titled "The EU Gigabit Connectivity Package and How It Will Hurt the Internet". Geoff has looked at the tensions between content and carriage, transit and CDNs, and the economics of networks for decades, and a conversation about the problems has gone on for some time now, some of which is repeated here, but with a new twist: some inside information from Vodafone about their underlying cost and price issues, which perhaps undermines the basis of the complaint from the European operator community to the EU seeking regulation of the "cost" side of carrying the content domestic consumers seek. Read more about the economics of the Internet on the APNIC Blog: RIPE 86 bites - Gigabits for EU (June 2023, Geoff Huston on this RIPE 86 presentation) On Internet Centrality and Fragmentation (July 2023, Geoff Huston) The Internet as a Public Utility (May 2023, Geoff Huston) An Economic Perspective on Internet Centrality (March 2023, Geoff Huston) Sender Pays (September 2022, Geoff Huston) Content Vs Carriage: Who Pays? (June 2022, Geoff Huston) Watch Rudolf van der Berg's talk at RIPE 86, or read his slides (pdf)…
Focusing purely on technology limits the understanding of Internet resilience (34:05)
In this episode of PING, Nowmay Opalinski from the French Institute of Geopolitics at Paris 8 University discusses his work on resilience, or rather the lack of it, confronting the Internet in Pakistan. As discussed in his blog post, Nowmay and his colleagues at the French Institute of Geopolitics (IFG), University Paris 8, and LUMS University Pakistan used a combination of technical measurement from sources such as RIPE Atlas, in a methodology devised by the GEODE project, combined with interviews in Pakistan, to explore the reasons behind Pakistan's comparative fragility in the face of submarine fibre-optic cable connectivity. The approach deliberately combines technical and social-science methods to explore the problem space, with quantitative data and qualitative interviews. Located at the head of the Arabian Sea, but with only two points of connectivity into the global Internet, Pakistan has suffered over 22 'cuts' to the service in the last 20 years. However, as Nowmay explores in this episode, there are in fact viable fibre connections to India close to Lahore, which are constrained by politics. Nowmay is completing a PhD at the institute, and is a member of the GEODE project. His paper on this study was presented at the 2024 AINTEC conference held in Sydney, as part of ACM SIGCOMM 2024. Read more about GEODE, and Nowmay's work: The GEODE project Pakistan, a case study in Internet fragility The Quest for a Resilient Internet Access in a Constrained Geopolitical Environment (AINTEC 2024 Paper)…
In his regular monthly spot on PING, APNIC's Chief Scientist, Geoff Huston, discusses another use of DNS extensions: the EDNS0 Client Subnet option (RFC 7871). This feature, though flagged in its RFC as a security concern, can help route traffic based on the source of a DNS query. Without it, relying only on the IP address of the DNS resolver can lead to incorrect geolocation, especially when the resolver is outside your own ISP's network. The EDNS Client Subnet (ECS) signal can help by encoding the client's address through the resolver, improving accuracy in traffic routing. However, this comes at the cost of privacy, raising significant security concerns. This creates tension between two conflicting goals: improving routing efficiency and protecting user privacy. Through the APNIC Labs measurement system, Geoff can monitor the prevalence of ECS usage in the wild. He also gains insight into how much end users rely on their ISP's DNS resolvers versus opting for openly available public DNS resolver systems. Read more about EDNS0 and ECS on the APNIC Blog and at APNIC Labs: Privacy and DNS Client Subnet (Geoff Huston, APNIC Blog July 2024) The use of ECS as measured by APNIC Labs…
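The ECS option itself is a compact structure inside the EDNS0 OPT record: option code 8, then the address family, source prefix length, scope prefix length, and the client address truncated to the prefix. A Python sketch of the wire encoding for an IPv4 /24, assuming whole-byte truncation (RFC 7871 additionally requires zeroing any trailing bits in the final byte for non-octet-aligned prefixes, which this /24 example sidesteps):

```python
import struct

def encode_ecs_ipv4(address: str, source_prefix: int = 24) -> bytes:
    """Encode an EDNS Client Subnet option (RFC 7871) for an IPv4 client."""
    octets = bytes(int(o) for o in address.split("."))
    keep = (source_prefix + 7) // 8  # only the whole bytes covering the prefix
    # FAMILY=1 (IPv4), SOURCE PREFIX-LENGTH, SCOPE PREFIX-LENGTH=0 in queries
    data = struct.pack("!HBB", 1, source_prefix, 0) + octets[:keep]
    # OPTION-CODE 8 = ECS, then OPTION-LENGTH, then the option data
    return struct.pack("!HH", 8, len(data)) + data

opt = encode_ecs_ipv4("198.51.100.7", 24)
# With a /24 source prefix, only 198.51.100 is sent; the host byte is dropped.
```

Truncating to the prefix is exactly the privacy trade-off discussed above: the resolver reveals the client's network, but not the full client address.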
In this episode of PING, Joao Damas from APNIC Labs explores the mechanics of the Labs measurement system. Commencing over a decade ago with an ActionScript (better known as Flash) mechanism, backed by a static ISC BIND DNS configuration cycling through a namespace, the Labs advertising measurement system now samples over 15 million end users per day, using JavaScript and a hand-crafted DNS system which can synthesise DNS names on the fly and lead users to varying underlying Internet Protocol transport choices, packet sizes, DNS and DNSSEC parameters in general, along with a range of Internet routing related experiments. Joao explains how the system works, and the mixture of technologies used to achieve its goals. There's almost no end to the variety of Internet behaviour which the system can measure, as long as it can be teased out of the user in a JavaScript-enabled advert backed by the DNS! Measurements from APNIC Labs: How we measure: RPKI ROA and ROV (2023) How we measure: DNSSEC Validation (2023) The APNIC Labs IPv6 Measurement system (2013)…
In his regular monthly spot on PING, APNIC's Chief Scientist Geoff Huston revisits the question of DNS extensions, in particular the EDNS0 option signalling the maximum UDP packet size accepted, and its effect in the modern DNS. Through the APNIC Labs measurement system, Geoff has visibility of the success rate for DNS events where EDNS0 signalling triggers DNS "truncation" and the consequent re-query over TCP, the impact of UDP fragmentation even inside the agreed limit, and the ability to handle the UDP packet sizes offered in the settings. Read more about EDNS0 and UDP on the APNIC Blog and at APNIC Labs: Revisiting DNS and UDP truncation (Geoff Huston, APNIC Blog July 2024) DNS TCP Requery failure rate (APNIC Labs)…
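The truncation signal Geoff measures is a single bit in the DNS header: if a UDP response arrives with TC set, the client is expected to re-ask the same question over TCP. A small Python sketch of the client-side check (the sample header bytes are constructed for the example):

```python
import struct

def is_truncated(dns_response: bytes) -> bool:
    """Return True if the DNS message has the TC (truncated) flag set.

    The flags are the second 16-bit word of the header; TC is bit 0x0200
    (QR is 0x8000, AA is 0x0400, RD is 0x0100).
    """
    (flags,) = struct.unpack("!H", dns_response[2:4])
    return bool(flags & 0x0200)

# A 12-byte header with QR and TC set, all counts zero:
truncated_reply = struct.pack("!6H", 0x1234, 0x8200, 0, 0, 0, 0)
print(is_truncated(truncated_reply))  # True
```

The failure mode measured at Labs is the gap between this signal and what follows: some fraction of clients that receive TC never successfully complete the TCP re-query.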
In this episode of PING, Caspar Schutijser and Ralph Koning from SIDN Labs in the Netherlands discuss their post-quantum testbed project. As mentioned in the previous PING episode about Post-Quantum Cryptography (PQC) in DNSSEC, with Peter Thomassen from deSEC and Jason Goertzen from Sandbox AQ, it's vital we understand how this technology shift will affect real-world DNS systems in deployment. The SIDN Labs system has been designed as a "one-stop shop" for DNS operators to test DNSSEC configurations for their domain management systems, with a complete virtualised environment to run inside. It's fully scriptable, so it can be modified to suit a number of different situations, and can potentially include builds of your own critical software components in the system under test. Read more about the testbed and PQC on the APNIC Blog and at SIDN Labs: PATAD: The SIDN Labs post-quantum cryptography DNSSEC testbed [Podcast] Testing Post Quantum Cryptography DNSSEC A quantum-safe cryptography DNSSEC testbed How organizations can prepare for post-quantum cryptography…
In his regular monthly spot on PING, APNIC's Chief Scientist Geoff Huston continues his examination of DNSSEC. In the first part of this two-part story, Geoff explored the problem space, with a review of the comparative failure of DNSSEC to be deployed by zone holders, and the lack of validation by resolvers. This is visible to APNIC Labs from carefully crafted DNS zones with validly and invalidly signed DNSSEC states, which are included in the Labs advertising method of user measurement. This second episode offers some hope for the future. It reviews the changes which could be made to the DNS protocol, or uses of existing aspects of the DNS, to make DNSSEC safer to deploy. There is considerable benefit in having trust in names, especially as a "service" to Transport Layer Security (TLS), which is now ubiquitous worldwide in the web. Read more about DNSSEC and TLS on the APNIC Labs website and the APNIC Blog: Calling time on DNSSEC (Geoff Huston, APNIC Blog, June 2024) 'Keytrap' attacks on DNSSEC (Geoff Huston, APNIC Blog, June 2024) DNS topics at RIPE 88 (Geoff Huston, APNIC Blog, June 2024) The Tranco list DNSSEC validation client usage (APNIC Labs) DNSSEC-enabled domains from Cloudflare public DNS (APNIC Labs)…
This time on PING, Peter Thomassen from deSEC and Jason Goertzen from Sandbox AQ discuss their research project on post-quantum cryptography in DNSSEC, funded by NLNet Labs. Post-quantum cryptography is a response to the risk that a future quantum computer will be able to implement Shor's Algorithm, a mechanism for uncovering the private key in the RSA public-private key cryptosystem, as well as in Diffie-Hellman and Elliptic Curve methods. This would render all existing public-private key based security useless, because once a third party knows the private key, the ability to sign uniquely over things is lost. DNSSEC doesn't depend on secrecy of messages, but it does depend on RSA and elliptic curve signatures, so we'd lose trust in the protections the private key provides. Post-Quantum Cryptography (PQC) addresses this by implementing methods which are not exposed to the weakness that Shor's Algorithm exploits, but at a higher cost and complexity. Peter and Jason have been exploring implementations of some of the NIST candidate post-quantum algorithms, deployed into BIND9 and PowerDNS code. They've been able to use the Atlas system to test how reliably the signed contents can be seen in the DNS, and have confirmed that DNS packet sizes under the new algorithms will be a problem in deployment as things stand. As they note, it's too soon to move this work into the IETF DNS standards process, but there is continuing interest in researching the space, with other activity underway from SIDN which we'll also feature on PING.…
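The packet-size problem mentioned above can be seen with simple arithmetic. The sketch below compares approximate published signature sizes (treat the byte counts as indicative, not exact) against the widely recommended 1232-byte EDNS UDP payload limit; a DNSSEC response must carry one or more signatures within that budget to avoid fragmentation or a fallback to TCP.

```python
# Approximate signature sizes in bytes (published figures; indicative only).
SIG_BYTES = {
    "Ed25519": 64,
    "ECDSA-P256": 64,
    "RSA-2048": 256,
    "Falcon-512": 666,                # NIST PQC signature scheme
    "ML-DSA-44 (Dilithium2)": 2420,   # NIST PQC signature scheme
}

EDNS_SAFE_PAYLOAD = 1232  # commonly recommended UDP payload limit

for alg, size in SIG_BYTES.items():
    verdict = "fits" if size < EDNS_SAFE_PAYLOAD else "likely forces TCP/fragmentation"
    print(f"{alg:24} {size:5} B -> {verdict}")
```

A single ML-DSA-44 signature already exceeds the safe UDP payload on its own, before the rest of the response is counted, which is why transport behaviour is central to this research.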
In his regular monthly spot on PING, APNIC’s Chief Scientist Geoff Huston discusses DNSSEC and its apparent failure to deploy at scale in the market after 30 years. Both the state of signed-zone uptake (the supply side) and the low levels of validation seen among DNS client users (the consumption side) give a strong signal that DNSSEC isn't making headway, compared to the uptake of TLS, which is now ubiquitous in connecting to websites. Geoff can see this by measuring client DNSSEC use in the APNIC Labs measurement system, and from tests of the DNS behind the Tranco top website rankings. This is both a problem (the market failure of a trust model in the DNS is a pretty big deal!) and an opportunity (what can we do to make DNSSEC, or some replacement, viable?), which Geoff explores in the first of two parts. A classic "cliffhanger" conversation about the problem side of things will be followed in due course by a second episode which offers some hope for the future. In the meantime, here's the first part, discussing the scale of the problem. Read more about DNSSEC and TLS on the APNIC Labs website and the APNIC Blog: Calling time on DNSSEC (Geoff Huston, APNIC Blog, June 2024) "Keytrap" attacks on DNSSEC (Geoff Huston, APNIC Blog, June 2024) DNS topics at RIPE 88 (Geoff Huston, APNIC Blog, June 2024) The Tranco top website rankings DNSSEC validation client usage (APNIC Labs) DNSSEC-enabled domains from Cloudflare public DNS (APNIC Labs)…
This time on PING, Philip Paeps from the FreeBSD Cluster Administrators and Security teams discusses their approach to systems monitoring and measurement. It's email. "Short podcast," you say, but no, there's a wealth of war stories and "why" to explore in this episode. We caught up at the APNIC57/APRICOT meeting held in Bangkok in February 2024. Philip has a wealth of experience in systems management and security, and a long history of participation in the free software movement, so his ongoing support of email as a fundamental measure of system health isn't a random decision; it's based on experience. Mail may not seem like the obvious go-to for a measurement podcast, but Philip makes a strong case that it's one of the best tools available for a high-trust measure of how systems are performing: the first and second order derivatives of mail flows indicate their velocity and rate of change, which in turn point to continuing or changing issues in the underlying systems. Philip has good examples of how mail from the FreeBSD cluster systems indicates different aspects of systems health: network delays, disk issues. He's realistic that there are other tools in the armoury, especially the Nagios and Zabbix systems, which are deployed in parallel, but from time to time the first, best indication of trouble emerges from a review of the behaviour of email. A delightfully simple and robust approach to systems monitoring can emerge from use of the fundamental tools which are part of your core distribution. Read more about Philip, FreeBSD, Zabbix and Nagios at their websites: FreeBSD Project home page The FreeBSD Foundation welcomes donations! The FreeBSD Project and Administration Philip's home page Zabbix for systems and network monitoring Nagios for systems and network monitoring…
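The "first and second order derivative" idea above can be illustrated with a toy calculation (the numbers are hypothetical, not FreeBSD data): take hourly mail-queue depth samples, then compute successive differences. A sustained positive first difference means the queue is growing; a positive second difference means the growth itself is accelerating, often the earliest hint of an underlying problem.

```python
# Hypothetical hourly mail-queue depth samples.
queue_depth = [12, 11, 13, 12, 40, 95, 180, 310]

# First difference: how fast the queue is growing (velocity).
first_diff = [b - a for a, b in zip(queue_depth, queue_depth[1:])]

# Second difference: whether the growth is speeding up (acceleration).
second_diff = [b - a for a, b in zip(first_diff, first_diff[1:])]

print("velocity:    ", first_diff)     # [-1, 2, -1, 28, 55, 85, 130]
print("acceleration:", second_diff)    # [3, -3, 29, 27, 30, 45]
```

The early samples jitter around zero; the consistently large positive values from the fifth sample onward are the kind of signal a sysadmin would investigate.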
In his regular monthly spot on PING, APNIC’s Chief Scientist Geoff Huston discusses the question of subnet structure, looking into the APNIC Labs measurement data, which collects around 8 million discrete IPv6 addresses per day worldwide. Subnets are a concept which "came along for the ride" at the birth of Internet Protocol, and were baked into the address distribution model as the class-A, class-B and class-C subnet models (there are also class-D and class-E addresses we don't talk about much). The idea of a subnet is distinct from a routed network; many pre-Internet models of networking had some kind of public-local split, but the idea of more than one level of structure in what is "local" had to emerge when more complex network designs and protocols came into being. Subnets are the idea of structure inside the addressing plan, and imply logical and often physical separation of hosts, and a structural dependency on routing. There can be subnets inside subnets; it's "turtles all the way down" in networks. IP had the ability out of the box to permit subnets to be defined, and when we moved beyond the classful model into Classless Inter-Domain Routing (CIDR), the prefix/length model of networks came to life. But IPv6 is different, and the assumption that we are heading to a net-subnet-host model of networks may not be applicable in IPv6, or in the modern world of high-speed, complex silicon for routing and switching. Geoff discusses an approach to modelling how network assignments are being used in deployment, which was raised by Nathan Ward at a recent NZNOG meeting. Geoff has been able to look into his huge collection of IPv6 addresses and see what's really going on. Read more about networks, subnets and address policy on the APNIC website and Blog: APNIC's current address policy RFC4632 Classless Inter-Domain Routing (CIDR) (IETF RFC) IPv6 Prefix Lengths (Geoff Huston, blog article)…
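The prefix/length model discussed above is easy to demonstrate with Python's standard `ipaddress` module. This sketch (using a documentation prefix, not a real allocation) carves a /48 end-site allocation into the /64 subnets of the conventional net-subnet-host model that the episode questions.

```python
import ipaddress

# A hypothetical IPv6 end-site allocation from the documentation range.
site = ipaddress.ip_network("2001:db8:1234::/48")

# Under the net-subnet-host model, the 16 bits between /48 and /64
# form the subnet field: 2**16 possible /64 subnets.
subnets = list(site.subnets(new_prefix=64))

print(len(subnets))     # 65536
print(subnets[0])       # 2001:db8:1234::/64
print(subnets[1])       # 2001:db8:1234:1::/64
```

Whether operators actually use that subnet field as structure, rather than as flat or opaque bits, is exactly what Geoff's address-collection analysis tries to infer.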
This time on PING, Doug Madory from Kentik discusses his recent measurements of the RPKI system worldwide, and its visible impact on the stability and security of BGP. Doug makes significant use of the Oregon RouteViews repository of BGP data, a collection maintained continuously at the University of Oregon for decades. It includes data dating back to 1997, originally collected by the NLANR/MOAT project, and holds archives of BGP Routing Information Base (RIB) dumps taken every two hours from a variety of sources, made available in both human-readable and machine-readable binary formats. This collection has become the de facto standard for publicly available BGP state worldwide, along with the RIPE RIS collection. As Doug discusses, research papers which cite Oregon RouteViews data (over 1,000 are known, but many more exist which have not registered their use of the data) invite serious appraisal because of the reproducibility of the research, and thus the testability of the conclusions drawn. It is a vehicle for higher-quality science about the nature of the Internet through BGP. Doug presented on RPKI and BGP at the APOPS session held in February at APRICOT/APNIC57 in Bangkok, Thailand. Read more about Doug's presentation, his measurements at Kentik, Oregon RouteViews, and the state of BGP and RPKI on the Kentik website and the APNIC Blog: RPKI ROV Reaches Major Milestone (APNIC Blog, May 2024) Doug Madory's blog at Kentik Digging into the Orange España Hack (APNIC Blog, January 2024) What can be learned from BGP hijacks targeting cryptocurrency services? (APNIC Blog, November 2022) The University of Oregon RouteViews project website The RIPE Routing Information Service (RIS) website…
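The RPKI route origin validation (ROV) that Doug measures follows the procedure of RFC 6811: each BGP announcement is checked against Route Origin Authorizations (ROAs) and classified as Valid, Invalid, or NotFound. The sketch below is a simplified illustration of that logic (example prefixes and AS numbers are from documentation ranges, not real routes), not Kentik's measurement code.

```python
import ipaddress

def rov_state(prefix: str, origin_as: int, roas) -> str:
    """Simplified RFC 6811 route origin validation.

    roas: list of (roa_prefix, max_length, asn) tuples.
    """
    net = ipaddress.ip_network(prefix)
    # ROAs whose prefix covers (contains or equals) the announcement.
    covering = [(p, maxlen, asn) for p, maxlen, asn in roas
                if ipaddress.ip_network(p).supernet_of(net)]
    if not covering:
        return "NotFound"   # no ROA covers this announcement
    if any(asn == origin_as and net.prefixlen <= maxlen
           for _, maxlen, asn in covering):
        return "Valid"
    return "Invalid"        # covered, but wrong origin or too specific

roas = [("192.0.2.0/24", 24, 64500)]
print(rov_state("192.0.2.0/24", 64500, roas))     # Valid
print(rov_state("192.0.2.0/24", 64501, roas))     # Invalid
print(rov_state("198.51.100.0/24", 64500, roas))  # NotFound
```

Doug's measurements track how the proportion of announcements in each of these three states, and the propagation of Invalids, has shifted as ROV deployment grows.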
In this episode of PING, APNIC’s Chief Scientist Geoff Huston discusses Starlink again, and the ability of modern TCP flow control algorithms to cope with the highly variable loss and delay seen over this satellite network. Geoff has been doing more measurements using Starlink terminals in Australia and the USA, at different times of day, exploring the system's behaviour. Starlink has broken new ground in Low Earth Orbit Internet services. Unlike geosynchronous satellite services, which have a long delay but constant visibility of the satellite in stationary orbit above, Starlink requires the consumer to continuously re-select a new satellite as they move overhead in orbit; in fact, a new satellite has to be picked every 15 seconds. This means there's a high degree of variability in the behaviour of the link, both in signal quality to each satellite and in the brief interval of loss occurring at each satellite re-selection window. It's a miracle TCP can survive, and in the case of the newer BBR algorithm even thrive, achieving remarkably high throughput when circumstances permit. This is because of the change from the slow-start, fast-backoff model used in CUBIC and Reno to a much more aggressive link bandwidth estimation model, which continuously probes to see if there is more capacity to use. Read more about satellites, TCP and flow control algorithms on the APNIC Blog and on the IETF website: An explainer on Coherent Optical Transceivers (Geoff Huston, APNIC Blog, 2024) Low Earth Orbit and the Congestion Control Problem (Geoff Huston, APNIC Blog, 2023) APNIC Labs measurements of Starlink (APNIC Labs) Comparing TCP and QUIC (Geoff Huston, APNIC Blog, 2022) Testing LEO and GEO Satellite Services in Australia Transport Protocols and Network Congestion Control at IETF 110…
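The contrast between the two congestion-control philosophies can be caricatured in a toy model (this is deliberately not real TCP code; units, rates and the 15-tick handover cadence are illustrative assumptions). A loss-based sender halves its window at every loss, so the periodic satellite handover repeatedly knocks it back; a rate-based, BBR-style sender keeps a bandwidth estimate and largely shrugs off the isolated loss.

```python
def loss_based_throughput(ticks=60, handover=15, cwnd=10.0, cap=50.0):
    """Reno/CUBIC-style caricature: additive increase, halve on loss."""
    total = 0.0
    for t in range(1, ticks + 1):
        if t % handover == 0:
            cwnd /= 2                 # multiplicative decrease on loss
        total += cwnd
        cwnd = min(cwnd + 1.0, cap)   # additive increase per interval
    return total

def rate_based_throughput(ticks=60, handover=15, rate=50.0):
    """BBR-style caricature: hold the bandwidth estimate, brief handover dip."""
    total = 0.0
    for t in range(1, ticks + 1):
        total += rate * 0.5 if t % handover == 0 else rate
    return total

print(rate_based_throughput() > loss_based_throughput())  # True
```

The loss-based sender spends most of each 15-tick epoch rebuilding its window, while the rate-based sender loses only the handover interval itself, which is the shape of the effect Geoff describes.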
This time on PING, Dr Mona Jaber from Queen Mary University of London (QMUL) discusses her work exploring IoT, digital twins and social-science-led research in the field of networking and telecommunications. Dr Jaber is a senior lecturer at QMUL and is the founder and director of the Digital Twins for Sustainable Development Goals (DT4SDG) group at QMUL. She was one of the invited keynote speakers at the recent APRICOT/APNIC57 meeting held in Bangkok, and the podcast explores the three major themes of her keynote presentation: the role of deployed fibre-optic communication systems in measurement for sustainable green goals; digital twin simulation platforms for exploring the problem space; and social-sciences-led research, an interdisciplinary approach to formulating and exploring problems which has been applied to Sustainable Development-related research through technical innovation in IoT, AI, and digital twins. The fibre-optic measurement method is Distributed Acoustic Sensing (DAS): "DAS reuses underground fibre optic cables as distributed strain sensing where the strain is caused by moving objects above ground. DAS is not affected by weather or light and the fibre optic cables are often readily available, offering a continuous source for sensing along the length of the cable. Unlike video cameras, DAS systems also offer a GDPR-compliant source of data." (The DASMATE Project at theengineer.co.uk) This episode of PING was recorded live in the venue and is a bit noisy compared to the usual recordings, but it's well worth putting up with the background chatter! Read more about Dr Jaber's presentation, the DAS system, digital twins and fibre-optic communications: Intelligent IoT for Sustainable Development Goals: Keynote talk at APRICOT/APNIC57 The recording of Dr Jaber's Keynote talk The DASMATE project: Assisting the uptake of Active Travel, Tower Hamlets, London The DT4SDG group page at QMUL Coherent Optical Transceivers (Geoff Huston, April 2024)…
In this episode of PING, APNIC’s Chief Scientist Geoff Huston discusses the European Union's consideration of taking a role in the IETF, as itself. Network engineers, policy makers and scientists from all around the world have participated in the IETF, but this is the first time an entity like the EU has considered participating as itself in the process of standards development. What's led to this outcome? What is driving the concern that the EU, as a law-setting and treaty body and an inter-governmental trade bloc, needs to participate in the IETF process? Is this a misunderstanding of the nature of Internet standards development, or does it reflect a concern that standards are diverging from society's needs? Geoff wrote this up in a recent opinion piece on the APNIC Blog, and the podcast is a conversation around the topic. Read more about digital sovereignty on the APNIC Blog and on the IETF website: Digital sovereignty and standards (Geoff Huston, APNIC Blog) As the Balance of Security Controls Shifts, Where Does Responsibility Rest? (Kathleen Moriarty, guest author on the APNIC Blog) Reflections on Ten Years Past the Snowden Revelations (IETF RFC9446) Pervasive Monitoring Is an Attack (IETF RFC7258)…
This time on PING we have Phil Regnauld from the DNS Operations, Analysis and Research Center (DNS-OARC), talking about the three distinct faces OARC presents to the community. Phil came to the OARC president's role replacing Keith Mitchell, who had been the founding president from 2008 through to this year. Phil has previously worked with the Network Startup Resource Center (NSRC), with AFNOG, and with the Francophone Internet community at large. DNS-OARC has at least three distinct faces. First, it is a community of DNS operators and researchers, who maintain an active ongoing dialogue face-to-face in workshops and online in the OARC Mattermost community hub. Second, it is a home, repository and ongoing development environment for DNS-related tools, such as DNSViz (written by Casey Deccio), hosting of the AS112 project, and development of the DSC system, amongst many other tools. Third, it is the organiser and host of the Day In The Life (DITL) activity, the periodic collection of 48-72 hours of DNS traffic from the DNS root operators and other significant sources of DNS traffic. Stretching back over 10 years, DITL is a huge resource for DNS research, providing insights into the use of DNS and its behaviour on the wire. Read more about DNS-OARC and its activities: The DNS Operations, Analysis and Research Center The DSC data collection and analysis system DNS-OARC software tools catalog The Day In The Life (DITL) collection…