Explore presentation recordings and slides from our 2024 Summit speakers. Browse them all or filter by topic or brand.
Join us to discover how Airbnb harnessed the power of Gen AI to enhance developer productivity. We will delve into the integration of large language models for code and test generation, as well as the transformative AI Copilot experience within IDEs. Explore the integration points throughout the developer journey that we found the most effective, and gain insights into selecting the best metrics for measuring the impact of AI tools. Let’s make engineering great again with AI!
Watch the video
This session covers the strategies Netflix uses to increase engineers’ comfort levels (confidence) with receiving and releasing code changes from peers and platform teams automatically (i.e., with no human interaction). By investing in confidence-building, Netflix believes it can increase development velocity, improve quality, reduce exposure to security vulnerabilities, and better enable away-team models. Learn how Netflix shares validations and feedback loop data with developers and managers to help them identify where the greatest leverage can be achieved, both at a small scale and in aggregate. Key themes include shifting software verification left, developer self-service insights, and failure impact analysis.
Watch the video
Karim will discuss theoretical and practical ways to measure and improve productivity, whether you’re early in your developer productivity journey or a seasoned expert. He will describe common pitfalls when it comes to measuring and surfacing productivity metrics on a dashboard. He will explain the problem with treating dashboards as the end result, and offer an alternative focused directly on productivity improvements.
Watch the video
Unlocking the productive potential of engineering teams goes beyond just software tools; it’s about shifting the focus back to the driving force behind it all: the developers themselves. Join us for an insightful discussion on how Microsoft leverages a blend of research techniques to understand the human factors impacting developer productivity. We will examine how AI is changing the way developers work, taking an honest look at where it is helping to improve the developer experience and where it isn’t, and we will address the hopes and concerns developers have about AI head-on. We’ll also cover the metrics Microsoft uses to assess developer productivity, explain why these metrics matter, and discuss whether our approach to metrics has changed due to the emergence of AI. How can teams increase the amount of time they have for uninterrupted, focused work? Does hybrid work really work? How much of a difference can a modern office really make to developer productivity? Join us to understand how to help your engineers thrive and to get answers to critical questions about the future of developer productivity in the age of AI.
Watch the video
Across all layers of the Software Development Life Cycle, Uber is investing in AI solutions to help developers “Ship Quality Faster”. Uber has formed a dedicated Developer Platform AI effort that spans many teams to deliver on that mantra. Adam and Ty will share the latest developments in Uber’s AI-driven developer productivity revolution. We’ll share the latest in the coding assistant landscape, including customizations to make them “monorepo aware”, how we’re thinking about large-scale code migrations with agentic systems, and how the test pyramid is being reshaped bottom-up with AI-powered code generation and top-down with probabilistic agents. You’ll leave with actionable strategies for implementing AI solutions in your own organization and a list of ideas for achieving rapid, high-impact results.
Watch the video
Diving into Android build optimizations, this talk revisits seemingly minor adjustments that hold untapped potential for speeding up your build process. Often overlooked or underestimated, these simple tweaks can be game-changers in enhancing build efficiency. I’ll share insights from my experience at Toast, where basic changes led to significant improvements, reminding you to give these solutions a second glance. Whether it’s fine-tuning Gradle properties, leveraging incremental builds, or optimizing resource usage, this session aims to highlight the often-skimmed solutions that might just be the key to a faster build. This session is ideal for developers looking to improve build times and enhance productivity, demonstrating that sometimes the most impactful optimizations are also the most accessible.
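As a concrete taste of the “fine-tuning Gradle properties” theme, the snippet below shows a few widely used gradle.properties flags of the kind this talk revisits. It is an illustrative sketch only, not the specific set of changes made at Toast; the right flags and values depend on your project and Gradle version.

```properties
# gradle.properties (illustrative sketch)
org.gradle.parallel=true             # build decoupled subprojects in parallel
org.gradle.caching=true              # reuse task outputs via the build cache
org.gradle.configuration-cache=true  # cache the configuration phase (recent Gradle versions)
org.gradle.jvmargs=-Xmx4g            # give the build JVM enough headroom
```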
Watch the video
Modularization and reuse of modules, through some form of dependency management, are a central part of every larger software project. While most projects have well-defined modularity when they start off, they often end up in a chaotic setup – also referred to as “dependency hell” – after a few years of development. And all too often, there is no ambition to climb back out until the project reaches an almost unmaintainable state. Not investing in this area earlier is usually a bad business decision. Issues in the modularity setup of a project have a negative impact on developer productivity in many ways that not only make the daily work of developers inefficient but also worsen the problems over time. In this session, we look at how these problems arise, the influence they have on developer productivity, and why they are so often invisible or ignored. In particular, we identify “accidental complexities” and separate them from “essential complexities” in this area. We then explore which tooling helps us to avoid the accidental complexities and deal with the essential complexities in a sustainable way. Based on this, we share ideas for future developer productivity tools and features that could be added to existing build and DPE tools like Gradle or Develocity. What we discuss in this session is based on experiences gained through helping multiple large Java projects get back to a maintainable modularity setup. Although we use Java, Gradle, and Develocity in examples, the concepts presented can be transferred to other languages and tools.
Watch the video
Gradle has a lot of performance advantages in comparison to Maven, but there are still several ways to speed up Maven builds.

Simple:
- Upgrade hardware, upgrade the JDK, and use the proper JDK for your architecture (e.g., Apple Silicon)
- Parallel execution: “mvn -T1C” in .mvn/maven.config (see the sketch below)
- maven.test.skip=true (relates to getting rid of test-jar dependencies)
- Kotlin K2 compiler
- Develocity Extension

Middle:
- Stop deploying redundant artifacts on each build
- Split large modules into smaller ones (with an explanation of why it makes sense and a comparison with Gradle)
- Remove redundant dependencies
- Get rid of “test-jar” dependencies
- Extract code generation into a jar dependency

Complex:
- Get rid of the AspectJ compiler (25% of compile time)

Paid:
- Develocity with remote cache

Additional topics:
- mvnd as a way to visualize parallelization bottlenecks (an alternative to Develocity or other plugins)
- IDEA: parallel compilation
- Eventually migrate to Gradle to open up new opportunities

Some of the advice for Maven works for Gradle projects as well.
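To make the simplest tier concrete, here is what a minimal project-level .mvn/maven.config could contain. Each line is passed to Maven as a command-line argument, so the two lines below enable one build thread per CPU core and skip test execution. Treat it as a sketch to adapt (skipping tests wholesale only makes sense for specific build flavors), not a blanket recommendation from the talk.

```
-T1C
-Dmaven.test.skip=true
```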
Watch the video
Let’s face it: as developers, we dedicate a third of our time to code maintenance, which includes tasks such as upgrading dependencies, addressing security vulnerabilities, and removing obsolete code. This is tedious and repetitive. Neglecting regular maintenance can lead to costly outcomes, including unexpected crashes, and it makes the codebase more difficult to understand and evolve. However, automation of these tasks is not always straightforward. Existing tools such as security scanners and feature flag systems warn you about the issues or obsolete code, but fall short of automatically rectifying these problems. Tools that upgrade dependencies merely increase the version number, leaving engineers to handle any API compatibility issues. Automating code changes is hard, and the polyglot nature of modern development makes it harder. In this talk, we will delve into code rewriting techniques such as pattern matching, program analysis, and AI. We will illustrate how we leveraged the complementary power of these tools to generate over 1,800 automated pull requests, eliminating or refactoring more than 500,000 lines of code. You will also learn how to harness the power of these tools to drive down tech debt, ensuring your codebase is not only functional but also future-proof.
Watch the video
Over the past year at Peloton, we’ve invested heavily in stabilizing and optimizing our complex build system, resulting in a build time reduction of over 50%. We’ll talk about the importance of observability, prioritizing stability, and optimizing for speed.
Watch the video
The Developer Platform team at Uber is consistently developing new and innovative ideas to enhance the developer experience and strengthen the quality of our apps. Quality and testing go hand in hand, and in 2023, we took on a new and exciting challenge: to change how we test our mobile applications, with a focus on machine learning (ML). Specifically, we are training models to test our applications just like real humans would.

Mobile testing remains an unresolved challenge, especially at our scale, encompassing thousands of developers and over 3,000 simultaneous experiments. Manual testing is common, but it carries high overhead and cannot be done extensively for every minor code change. While test scripts can offer better scalability, they are not immune to frequent disruptions caused by minor updates, such as new pop-ups and changes to buttons. All of these changes, no matter how minor, require recurring manual updates to the test scripts. Consequently, engineers working on this invest 30-40% of their time in maintenance. Furthermore, the substantial maintenance costs of these tests significantly hinder their adaptability and reusability across diverse cities and languages (imagine having to hire manual testers or mobile engineers for the 50+ languages that we operate in!), which makes it very difficult for us to efficiently scale testing and ensure Uber operates with high quality globally.

To solve these problems, we created DragonCrawl, a system that uses large language models (LLMs) to execute mobile tests with the intuition of a human. It decides what actions to take based on the screen it sees and its goals, and it independently adapts to UI changes, just like a real human would. Of course, new innovations also come with new bugs, challenges, and setbacks, but it was worth it. We did not give up on our mission to bring code-free testing to the Uber apps, and toward the end of 2023, we launched DragonCrawl. Since then, we have been testing some of our most important flows with high stability, across different cities and languages, and without having to maintain them. Scaling mobile testing and ensuring quality across so many languages and cities went from humanly impossible to possible with the help of DragonCrawl. In the three months since launching DragonCrawl, we blocked ten high-priority bugs from impacting customers while saving thousands of developer hours and reducing test maintenance costs.

In this talk, we will deep dive into our architecture, challenges, and results. We will close by touching on what is in store for DragonCrawl.
Watch the video
This talk discusses the challenge of determining what should be released in large-scale software development, such as at Meta’s scale. To address this, we developed models to determine the risk of a pull request (diff) causing an outage (aka SEV). We trained the models on historical data and used different types of gating to predict the riskiness of an outgoing diff. The models were able to capture a significant percentage of SEVs while gating a relatively small percentage of risky diffs. We also compared different models, including logistic regression, BERT-based models, and generative LLMs, and found that the generative LLMs performed the best.
Watch the video
Discover how Block’s Android DevEx team optimized CI performance in a large-scale CI system using advanced Gradle techniques and optimizing ephemeral worker setup. This talk will cover strategies for enhancing build speed and efficiency, managing thousands of simultaneous jobs, and improving CI infrastructure to handle extensive workloads.
Watch the video
AndroidX is a set of hundreds of libraries from dozens of separate teams. These libraries ship releases every couple of weeks from our monorepo. Join me to learn what we’ve done to enable shipping all of these libraries with a manageable amount of pain.
Watch the video
Flaky UI and unit tests have long been a bane for many mega repositories at Block. In this talk, we will share our journey of tackling flaky tests head-on through both offensive and defensive strategies. Learn how we implemented key changes to our engineering culture and processes to gain control over these elusive issues. Join us to discover practical insights and actionable steps to improve the reliability of your test suites at scale.
Watch the video
As the quantity of code grows, so does the complexity of managing it. Even if you do everything right, follow every best practice, and never make a bad design decision, scale brings problems. Modern software projects have extensive lists of first- and third-party dependencies and tools. Each dependency has its own, usually unpredictable, cadence of releases and vulnerability disclosures. As complexity increases, these costs scale exponentially. Software engineers must constantly strain against these tides or be swept away to obsolescence. In this talk, we will explore practical strategies and tools for automating the maintenance and modernization of builds across extensive codebases. We will dive into using OpenRewrite, a powerful set of tools for automating updates to code, data, and build logic. Using OpenRewrite, organizations can perform framework migrations, update dependencies, and integrate new tools such as Develocity, all while maintaining consistency and reliability across thousands of repositories. Discover the challenges faced, accomplishments achieved, and the path ahead for maintaining builds at scale. Join us to learn how OpenRewrite can transform your build maintenance practices, ensuring your projects remain robust, up-to-date, and ready for future developments.
Watch the video
Testing is usually associated with quality, but can it also improve productivity? Take a peek at how Uber is pushing the boundaries of testing with innovation and AI. Learn how Uber shifted end-to-end testing left to gate 50% of its code and config changes pre-land with thousands of tests, resulting not only in a 70% reduction in outages caused by code changes, but also in increased developer productivity by preventing disruptive changes from ever landing in the code base.
Watch the video
Build Scan and Develocity are game changers in collecting data on key elements of developer productivity: building and testing code. This innovation lies in the seamless integration of data collected from continuous integration environments with data collected from developer environments. For the Gradle Build Tool project, this represents more than a million Build Scans retained in the public Develocity instance at https://ge.gradle.org. Some of that data is leveraged to show the different Develocity screens. But the system contains so much more data. What can we learn from it? What kind of analysis can we run on it? In this session, we will explore how the Gradle Build Tool Engineering team leveraged the data collected on https://ge.gradle.org to identify potential issues, measure the impact of changes, and confirm that they have a positive effect. What did we improve? What surprised us? Join this session to discover the answers to those questions.
Watch the video
In the tech industry, a common and critical challenge is managing ownership of assets, especially as teams and priorities change over time. This problem is amplified during outages or critical issues, where identifying the responsible party can consume a significant amount of time. To address this, LinkedIn developed the concept of Crews, which are organized groups responsible for maintaining key infrastructure and assets. Crews ensure clear accountability and ownership, independent of individual user names, accommodating the natural mobility of people who are more likely to change teams than companies. Our backend service integrates with Workday to understand the employee hierarchy, enabling dynamic team management and asset ownership handshaking. The frontend provides intuitive interfaces for managing these relationships, ensuring every asset has a clear owner. We started by enabling true ownership for 15,000 repositories and services, and are now scaling to manage tens of millions of assets across various types and groupings, making it a robust solution for any organization’s needs.
Watch the video
Set up a toolchain that allows you to efficiently gain actionable insights into where you will get the most build reliability and acceleration improvements in return for your investment. This presentation will explain:
1. The toolchain of CI plugins used to inject the DV configuration and capture all CI builds
2. Using DRV to determine which projects require the most attention
3. Using experiments, BVS, and other measures to perform the optimization and stabilization actions
The primary focus will be on #2, DRV.
Watch the video
American Airlines, the world’s largest airline, has been working on developer experience for many years to allow developers to work in more efficient ways through a delightful developer platform. In 2020, they made the decision to make Backstage the foundation of “Runway,” their developer platform, and they have since grown a large plugin ecosystem around their expansive platform to enable faster, safer delivery. In this talk, they will discuss their initial hackathon strategy, engaging the Backstage community, UX flow, templates, improving feedback loops, and much more. By bringing together the pieces developers need to do their jobs, there is much less friction, developers spend more time writing code, and the developer experience has improved.
Watch the video
A shorter pull request (PR) cycle time is essential for improving developer experience, but too often, pull requests are too complex, touch too many files, and require too many iterations to be quickly and thoroughly reviewed by a peer. Analysis of our data at Atlassian indicates that this results in longer PR cycle times and release times. In this session, we’ll introduce the PR Complexity Score, how we calculate it, and how it helps identify PRs that should be reworked before being submitted for review. We’ll share how, as part of a recent project, we made the score’s value prominent and explained its meaning within the context of a pull request. Taking it a step further, we will illustrate how AI can assist by suggesting ways to simplify the changes. Achieving faster approval for PRs is possible, and optimizing release time will be beneficial for everyone!
Watch the video
See how Intuit instrumented its development processes to understand and then optimize its development speed and quality.
Watch the video
In this talk, we discuss Developer Productivity Engineering (DPE), and why and how more and more organizations, including Zalando, are investing heavily in this relatively uncharted discipline. We will begin our discussion with a systematic review of DPE approaches in the industry and provide insights into Zalando’s evidence-based approach to DPE Strategy. Finally, we will outline how we executed our app DPE strategy, resulting in significant app health and productivity wins.
Watch the video
This talk will highlight the view of cognitive psychologists on developer productivity (DP) and developer experience (DX). It aims to help DPEs in their decisions about people and processes, providing them with a short and useful theoretical framework taken from social and organizational psychology. In this talk, I will uncover how the senses of being autonomous, competent, and related to other people (a.k.a. the Self-Determination Theory’s three main pillars) influence the satisfaction, efficiency, and communication dimensions, and thus overall developer productivity and experience. I will dissect the Self-Determination Theory and discuss concrete strategies to foster developers’ subjective experiences within your teams to boost their satisfaction and productivity, based on comprehensive research data. Here are a few examples of how autonomy, competence, and relatedness to others manifest themselves in the everyday tasks of software developers:
- Developers’ feeling of autonomy is higher when coding and lower when they are in meetings or writing emails.
- Developers’ feeling of competence drops when they are bugfixing.
- When developers help colleagues, they experience higher levels of competence and relatedness to the team (Russo et al., 2023).
Keeping these three core subjective feelings – autonomy, competence, and relatedness – in mind when making decisions about either people problems or tooling will boost satisfaction and productivity in your engineering teams.
Watch the video
Effective decision-making in software organizations relies on good data: you can’t improve what you don’t measure. Current ways of measuring software org productivity are flawed and may encourage counterproductive behavior. We propose a new approach to measuring productivity, developed through years of Stanford research.
Watch the video
In this talk, we will describe how our team uses mixed-methods research to understand and measure developer productivity and provide a couple of examples of how our studies impacted decisions about developer tooling within Google.
Watch the video
We need observability on the path to production to gather the data to identify bottlenecks and friction in the tools and processes developers use. This means having visibility into developers’ local development environments and the staging environments (including CI) that the code goes through before finally being deployed to production. These environments are the production environments for creating software, and without visibility into what’s happening here, we don’t know what blockages or security issues exist there. What differentiates DPE from other related disciplines? It’s the “E” for “Engineering”. DPE uses engineering practices to identify and address issues like these. An engineering approach means:
- Formulating hypotheses
- Gathering data to support or reject the hypotheses
- Acting upon the data
Having observability on our path to production is fundamental to gathering the data required for this approach, and it even enables us to identify problems we weren’t aware of. During this keynote, Hans will show examples of how this works in practice.
Watch the video
Meta’s approach to a Productivity framework and our journey tying it to both business outcomes and developer happiness.
Watch the video
Join Abi Noda (CEO of DX) and Margaret-Anne Storey (co-author of DevEx and SPACE, University of Victoria) for a fireside chat that explores the evolution of developer productivity research. They’ll dive into the backstory of DevEx, SPACE, and the just-published DX Core 4—while sharing candid perspectives on current challenges.
Watch the video
The biggest threats to the long-term health of any development organization are brain-drain and burnout. Retaining the people who make your organization successful and keeping them functioning are the most critical objectives in productivity engineering. Yet to many companies, these ideas seem like an afterthought or a convenience rather than the critical components they are. Come with me as I explore a couple of the worst choices you can make in structuring your dev organization and what to do instead.
Watch the video
Tech on the Toilet is a weekly one-page publication about software development that is posted in bathrooms in Google offices worldwide and is read by tens of thousands of Google engineers. It is one of the most effective ways to quickly spread software development knowledge across Google. It covers topics such as code quality tips, unit testing best practices, and developer productivity tools. This talk will give an overview of the Tech on the Toilet program, and share lessons learned that can be applied at other companies.
Watch the video
Atlassian has invested significant effort in instrumenting, measuring, and learning how its 5000 engineers develop software. We constructed diagnostics (Vital Signs) and a principled approach (5R) to transform data into insights and context-aware recommendations. This session introduces our 5R framework that enables Atlassian to deliver the (R)ight insights to the (R)ight person at the (R)ight time in the (R)ight place to do the (R)ight thing. We will also cover Vital Signs, our diagnostic toolkit for detecting friction and bottlenecks, diagnosing productivity issues, and developing practices that improve the developer experience. Join us to learn how the partnership between Atlassian engineering & data science is used to understand developer productivity and generate actionable insights that help teams sustainably improve their engineering health.
Watch the video
The process of tracking down and resolving build failures is a big pain point for many software development teams. Developers are often not sure what the actual cause of an observed build failure is, or whether the failure they are seeing was caused by their changes or is part of a larger problem. Infrastructure teams with several problems to solve often struggle to identify which problems are most impactful and thus should be resolved before others. Answering these questions often requires someone manually sifting through a large amount of failure messages and log text to determine the cause of the failure, which can be ineffective and/or time-consuming. There must be a better way to automatically parse through the noise and find the information we’re looking for. At Gradle, we have been thinking a lot about how to better approach these problems using modern data science and AI modeling techniques. In this talk, we’ll discuss our journey in researching this topic and how our current methods are being used to improve developer productivity within our organization.
Watch the video
In this lightning talk, we will briefly look at the differences between software factories and software logistics and how they map to the overall software supply chain. We will demonstrate how you can use GitLab CI Steps to decompose your Gradle builds to ensure you are collecting the right data about the build before it becomes a package. We will show you how to leverage SemVer to create a production-level package across different branching strategies, getting it ready for deployment to production.
Watch the video
Check out a collection of top moments from the 2023 Developer Productivity Engineering Summit in San Francisco.
Watch the video
Explore presentation recordings and slides from our 2023 Summit speakers. You can browse them all or filter by topic and brand.
Hans explores the knife sharpening industry as a way of thinking about developer productivity—how we measure it and how we move it. He makes the case that for software development, just like for meat processing, collecting data and having the bandwidth and skills to interpret that data and apply learnings is the only way we can truly move the industry forward.
Watch the video
Abi at DX addresses ongoing challenges in developer productivity that persist despite the last 10-15 years of technological advancement. He emphasizes shifting from traditional metrics to staff-ranked productivity, aligning with outcome-focused DPE principles, and tackling issues like slow builds and context-switching.
Watch the video
Adam at Meta diverges from the technical aspects of DPE, delving into the role of psychology in decision-making. He analyzes developer and manager characteristics and emphasizes the value of laziness by avoiding unnecessary work (a key DPE principle). Sharing anecdotes, he emphasizes the value of saying “no”, showcases different manager archetypes, and explains how to enhance personal productivity for organizational recognition.
Watch the video
Alex at Aspect looks into the first Bazel users who reported 3-10x speed-ups. Who are they? It turns out most users didn’t experience this out of the box. His talk touches on specific optimization aspects of increasing developer productivity with Bazel, including non-persistent workers, low cache hit rates, network and cluster issues, and the dark side of remote execution.
Watch the video
Alexander at JetBrains highlights the importance of partnerships when it comes to customer success—he focuses on building trust and transparency and using surveys and data to make incremental improvements to their offerings. JetBrains and DPE both rely on partnerships for success, while emphasizing the key drivers of transparency, communication, and data-driven decision-making.
Watch the video
Ali from Uber describes how their 200-person Developer Platform team drives efficiency across 100k+ monthly deployments while prioritizing developer satisfaction. Ali’s keynote stresses the significance of DPE tools—like performance acceleration technologies for faster feedback cycles and ML-driven automated testing for driver-rider interactions—showcasing their universal benefits, even for companies not at Uber’s scale.
Watch the video
If you’re interested in faster tests, flaky test detection/remediation, remote test execution, and predictive test selection, this talk is for you. Pro Tip: How they rolled out Develocity’s Predictive Test Selection AI/ML technology to save 107 days of test execution time in the first month is quite interesting.
Watch the video
Aurimas shares his Android on-device testing tips, including what you should avoid in order to run more effective tests. He shares an AndroidX case study covering how they keep their continuously growing test suite fast.
Watch the video
Brian from the Jamf DPE team shares how he measured the impact of codebase growth on build times and developer productivity. Brian also shares how Develocity’s Predictive Test Selection reduced unit test time by 36% and integration test time by 39%. If you’re interested in build/test performance acceleration, this talk is for you.
Watch the video
Etienne from the Develocity (formerly Gradle Enterprise) engineering team shares how you can use the latest Develocity build/test observability feature—build validation scripts—to monitor build cache misses across many projects. You can identify which of your projects had build cache misses, the number of misses, and the amount of engineering time lost. He also explains how to generate a fast link to the Build Scan UI to investigate and fix problems.
Watch the video
Etienne from the Develocity (formerly Gradle Enterprise) engineering team shows how you can use Develocity to capture CI build/test data from many projects to identify productivity bottlenecks. He shares how to use the Develocity telemetry and API data to surface and prioritize DPE initiatives. Pro tip: If you’re interested in DPE build/test metrics, query and visualization with AWS Athena and Grafana is quite interesting.
Watch the video
Gabriel from iFood shares how they used the Develocity API to capture build/test metrics across their projects and builds. They used these metrics to create reports and dashboards to monitor flaky tests, and then used the insights to generate aggregate reports around hard-to-find bottlenecks. How they use the Develocity API for monitoring flaky tests and automating the creation of test tickets is pretty cool.
Watch the video
Henry from the Apple Maps team shares how they solve dependency hell at scale. If you’re interested in SBOMs (software bills of materials), dependency analysis/graphs, and DPE for microservices, this is the talk for you. Pro Tip: The automated dependency updates across many projects are especially interesting.
Watch the video
Learn more about the CI team behind one of the largest Android application teams on the planet. Inez shares Block’s techniques for UI test avoidance: they decompile the app and test APKs, take a hash of the results, and skip the UI tests when they determine that they have already tested that combination. This technique minimizes the set of CI shards that need to run and resulted in 50% of shards being skipped.
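For illustration only, the avoidance check can be thought of as something like the following Kotlin sketch: hash the app and test APKs and run the UI test shards only when that combination has not been seen before. The helper names (shouldRunUiTests, previouslyTestedHashes) are hypothetical, and, as described above, Block’s real pipeline hashes the decompiled APK contents rather than the raw file bytes.

```kotlin
import java.io.File
import java.security.MessageDigest

// Hash a file's bytes with SHA-256 and return a hex string.
fun sha256(file: File): String =
    MessageDigest.getInstance("SHA-256")
        .digest(file.readBytes())
        .joinToString("") { "%02x".format(it) }

// Illustrative only: run UI tests unless this exact app/test APK
// combination has already been exercised on CI. In practice the APKs
// would be decompiled/normalized before hashing so that signing data
// and timestamps don't defeat the lookup.
fun shouldRunUiTests(
    appApk: File,
    testApk: File,
    previouslyTestedHashes: Set<String>,
): Boolean {
    val combination = "${sha256(appApk)}:${sha256(testApk)}"
    return combination !in previouslyTestedHashes
}
```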
Watch the video
Laurent explores the challenge of aligning metrics between developers and executives. Leveraging self-reported developer productivity metrics, his team at Spotify distinguishes between leading metrics (short-term actions) and lagging metrics (long-term impact)—with the goal of connecting actions to long-term gains and avoiding “vanity” productivity measures.
Watch the video
Manuel shares how the Pinterest mobile team tracks local/CI build times, CI uptime, and other build metrics to measure the state of their builds. They label builds with a number and letter grade to determine build health.
Watch the video
Rui from Meta navigates the complexities of measuring developer productivity. He critiques DORA and SPACE metrics, suggesting a three-pronged approach: Velocity, Reliability, and Code Readability. He explores academic research on test productivity, dead code removal, and defining code readability.
Watch the video
The JPMC toolchain team reveals the DPE challenges of supporting 40k+ developers across 100k+ repos. They share how they measure CI quality of service in terms of predictability, reliability, and developer experience, and how they implemented a developer experience platform.
Watch the video
Szczepan at Airbnb asks: Does AI actually make our developers more productive? His team set out to test this theory by trying different developer productivity use-cases with AI tools like ChatGPT and GitHub Copilot. Seeing some early successes, they created their own custom AI model called One Chat, specifically designed to help Airbnb developers be more productive.
Watch the video
Ty at Uber shares real-life stories that most developers can relate to, as well as a lesson learned the hard way: a crash in third-party code means a crash in your app. Ty shows how his team balances risk and reward with actual code examples that Uber uses to prevent bugs, dependency conflicts, and other issues before they can exact a real toll.
Watch the video
For Adrian and Bartosz at Samsung, software running on embedded systems—powering mobile phones, cars, and IoT devices—presents complex needs that require a specialized approach to DPE. In this talk, learn why Samsung created Code Aware Services (CAS) to bring forward data about builds and source code to further refine their developer productivity metrics and initiatives.
Watch the video
Ana at Nexthink describes their dedicated developer productivity organization which prioritizes developer feedback based on regular surveys that offer data-driven insights. By investing in boosting engineers’ satisfaction and productivity, they’ve expedited feedback cycles and created an easier onboarding experience via internal self-service platforms.
Watch the video
Anna’s keynote unveils Airbnb’s DPE best practices: internal surveys for gauging developer productivity, DevX metrics for tracking progress, and Airdev—an internal platform for reducing cognitive load on developers that helps tackle issues like slow feedback loops and flaky tests.
Watch the video
The DoorDash team shares their experiences exploring Gradle vs. Bazel for building a monorepo. They share the challenges they faced and how certain Gradle Build Tool features helped solve them. They discuss custom plugins, composite builds, and version catalogs. If you’re working on builds/tests in large repos, this talk is for you.
Watch the video
Christopher at Airbnb describes his experience with DORA metrics and identifies some DORA do’s and don’ts. For example, DORA is useful for developing a common language and starting conversations about developer productivity; however, it’s not ideal for gauging the success of specific projects. In that context, looking at build wait times, test pass rate, and work environment factors can be more meaningful.
Watch the video
Gautam and Serdar at Uber explore whether Generative AI can match the productivity of a “10x developer”. They discuss results from using Generative AI for tasks like refactoring, maintaining tests, incident management, and documentation enhancement. Their analysis reveals that while Copilot and other tools make developers feel more productive, they produce only an average of 1-2 lines of usable code.
Watch the video
Grant from the LinkedIn developer insights team shares how they capture productivity engineering metrics from teams, products, and projects at LinkedIn. He explains how they collect, aggregate, analyze, and visualize these metrics for engineering leaders and productivity champions, and he surfaces examples of impactful metrics, such as the median duration that PR authors wait for feedback in code reviews.
Watch the video
Jake shares how previously the Cash App Android, iOS, and web apps were all developed natively, resulting in two-week release trains for mobile apps with 1-2 week rollout periods. By using Kotlin Multiplatform, they were able to substantially improve those deployment times to get their apps released faster.
Watch the video
Lee highlights Spotify’s dedication to DPE through their investment in Backstage, an internal developer portal that they later donated to the CNCF. Lee explains how Backstage—now supporting over 4 million external developers—aligns directly with a key DPE goal: to accelerate developer productivity by eliminating distractions and delays.
Watch the video
Louis dives into how the standard performance optimizations that enhance developer productivity with Gradle Build Tool can be hindered in a stateless, ephemeral CI environment. He then shares which performance features make sense in these environments and walks through how to optimize Gradle Build Tool for various use cases.
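One commonly cited example of the kind of trade-off this talk covers (an illustrative assumption, not necessarily Louis’s exact recommendation): on a stateless, ephemeral worker the local Gradle build cache starts empty on every run, so a remote build cache is usually the feature worth leaning on. A minimal settings.gradle.kts sketch, with a placeholder cache URL:

```kotlin
// settings.gradle.kts (illustrative; the cache URL is a placeholder)
buildCache {
    local {
        // Little value on a throwaway worker that is discarded after the build.
        isEnabled = false
    }
    remote<HttpBuildCache> {
        url = uri("https://build-cache.example.com/cache/")
        // Only CI populates the cache; developer machines read from it.
        isPush = System.getenv("CI") != null
    }
}
```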
Watch the video
The Engineering Platform and Integrated Experience team at JPMC tells their story about how they boosted developer productivity with LLMs. Leveraging metrics, traces, and logs, along with other data, they produced a self-service, natural language interface that assists developers, CI/CD engineers, product managers, and other stakeholders in rapidly discovering and applying insights to many sorts of queries.
Watch the video
Max’s keynote humorously explores developer unproductivity, before describing some more serious DPE strategies like internal surveys, diverse team tooling, and flexible standards. His 20 years of insights may provide clues to finding your own successful path to DPE excellence.
Watch the video
Neil at Meta describes developer productivity using one of Meta’s internal build tools, Buck2. He discusses performance improvements made since the original Buck, new features like abstraction through APIs, parallel and incremental compute, remote execution, and the use of virtual files to improve developer productivity.
Watch the video
Block has one of the largest Android application development teams on the planet. They share how they managed their IDE scaling challenges while growing to 4500+ modules. Get their lessons learned from managing sync times, memory leaks, and more. If you have large projects and are struggling with IDE experience, this talk is for you.
Watch the video
Ravikumar at Adobe discusses seven productivity factors for high-velocity teams, including internal developer platforms, tailored metrics from DORA and SPACE frameworks, and the role of Generative AI at Adobe. He emphasizes the positive impact of internal platforms for enhancing developer productivity as well as how Generative AI is beginning to transform DPE at Adobe.
Watch the video
Rob from JPMC shares his lessons learned from capturing developer experience metrics across their developer organization. He shares which metrics led to developer happiness and how those metrics impacted job satisfaction and productivity.
Watch the video
Valera explains why ad hoc code cleanup doesn’t scale. He shares his team’s lessons learned from handling tech debt at Slack with a code health score system. The impact of the Slack health score case study and their stats on pull requests are particularly interesting. Pro Tip: code health and tech debt impact developer happiness.
Watch the video