What to expect when transitioning from IC to engineering manager

Making the move from individual contributor to EM? Here’s everything you need to know.

Source post→

Why we need Staff Product Managers

Product managers at all levels have to demonstrate a skill set grounded in three broad pillars: knowledge of their domain, product vision and strategy, and people skills. In addition to all the responsibilities of a senior PM, Staff PMs are responsible for defining the product vision and strategy of a solution or go-to-market motion in collaboration with product leadership, and for coordinating work across multiple product teams.

Source post→

How to build a healthy relationship between engineering and product

Struggling to work productively with your product partner? Here are four ways to build great rapport.

Source post→

Upgrading Data Warehouse Infrastructure at Airbnb

At Airbnb, the Hive-based data ingestion framework processes >35 billion Kafka event messages and 1,000+ tables per day, landing datasets ranging from kilobytes to terabytes into hourly and daily partitions. The challenges of operating at that scale motivated us to upgrade our Data Warehouse infrastructure to a new stack based on Iceberg and Spark 3, which addresses these problems and also provides usability improvements.

Source post→

Prioritizing High-leverage Activities

The one quality of my work that has been praised the most is its efficiency. It's something that's always come naturally to me. Thus, over the years I've been led to believe I'm an efficient developer. But somehow I've always failed at sharing this efficiency with my coworkers, and, what's worse, at explaining why I manage to finish tasks at the rate that I do. If I'm so efficient, what can I do to improve the efficiency of my team? I still can't fully explain it, but I've found some concepts in The Effective Engineer by Edmond Lau (a great book for engineers and non-engineers alike) that strongly resonate with me. The core tenet of the book is to prioritize high-leverage activities.

High-leverage activities

What is leverage? The book defines it as value produced per time invested. Since the time we spend at our desks is limited (as it should be), in order to increase the leverage of a task we can only:

- have the task yield more value,
- complete the task faster, or
- do a different, more valuable task.

For each of those ways, I'll share one piece of advice with you.

Do the same thing and achieve more: teach along the way

If you work alone, it's pretty much impossible to do the same thing for the same amount of time and have it yield more value. But if you work in a team, you may be able to. One side effect of you sitting down at your computer and producing quality software is that someone could learn a few good practices from it. If there's someone new on your team, they may learn a lot just from watching you work and hearing you think out loud over a video call. Even if doing this slows you down a little, it's likely that the overall balance will be positive. Finding the time to actively onboard newcomers to a project can be hard, especially when there are junior developers and/or deadlines in the mix, so this can be an inexpensive way to transfer some knowledge without stopping development.

Complete tasks faster: try new tools

Practice makes perfect, or so they say. Whole articles could be written on how true or untrue that is. However, it's much less controversial that practice builds speed. Since we developers build software for many hours a week, practicing more than we already do is not a viable path. If we want to get things done faster, it's not just about doing more, but about doing it differently. "How can I build this feature faster?" is a line of thought that can be worth exploring, but it's not what I want to focus on. I'd rather shine some light on the underlying skills instead. If you invest the time to learn how to do simple tasks faster, you'll speed up the more complex tasks they're part of. Being efficient at the most basic tasks can save a huge amount of time.

My advice here is to not be afraid to try out new tools or learn new keyboard shortcuts. Any momentum you lose while adapting to the new way of doing things, you'll earn back many times over when that repetitive task gets done faster. Sometimes, the vast experience we have with the tool we're used to doesn't feel worth discarding in the search for an efficiency boost. Been there, done that. However, I try to remind myself that even if I have to go back to my original tool, I still get some value out of trying a new one. The ability to adapt is essential in the ever-changing tech world, and trying new tools helps hone that skill in a low-stakes context. Of course, if a tool simply doesn't work out for you, there's no sense in sticking with it. But there's no reason not to try your luck again with the next tool that comes along.

Which tools to use is a highly personal choice. What works for me may not work for you. I'll share a list anyway, just in case you feel inspired to try something new and want to make use of that momentum right away:

- Vim plugin for VS Code. The learning curve for Vim is quite steep, but it pays off.
- tmux + Tmux Resurrect, to quickly switch between projects and never lose your terminal context after system restarts.
- Oh My Zsh git aliases. gpsup instead of git push -u origin [branchname]? Sign me up.
- Dedicated keyboard shortcuts to switch to the terminal, browser and text editor. Alt + Tab requires you to cycle through all your running applications, which is quite inefficient.
- The most useful Slack keyboard shortcuts. Many people forget about shortcuts and revert to point-and-click when not in the text editor. (Cmd + K and Cmd + [ are my best friends now; look them up.)

Do a different task: automate it!

Most of the time, low-leverage tasks are simple enough to automate, at least partially. The different task I suggest you do is the automation itself. Is the automation effort worth it? As a general rule, I'd say yes. But there's always a relevant xkcd, so you don't need to believe me: just look it up in the table. The table doesn't even factor in that the boring task at hand may also be performed by other people on your team. On large teams, the cumulative time saved by an automation could be enormous. Automating repetitive processes is one of the highest-leverage activities out there.

If you're not yet convinced, I have an example for you. On a project I worked on two years ago, the deployment process was detailed in a long Notion document. How to deal with risky changes, what status checks to perform before deploying and how to handle edge cases were all covered in that document. At first, we had only two deployers. They knew the process by heart and only referred to the document occasionally. But as the team grew they were quickly overwhelmed, so our manager decided that everyone was to start deploying their own changes. For the new deployers, the process was stressful, slow and error-prone. The document wasn't well structured, and forgetting a small step could spell disaster.

Between steps, continuous integration checks would sometimes take upwards of ten minutes, so while they ran you would wait, repeatedly checking whether the next step was ready to be started. Since the deployment pipeline was blocked during the whole process, developers were expected to fire off the next step as soon as possible (otherwise the other devs in the release queue would have to wait unnecessarily). After one particularly long and tiresome deploy, I decided to write an interactive CLI that guided you through the steps by asking simple questions (such as "How many PRs do you wish to deploy?" or "Do your changes include migrations?"). Using the GitHub, Heroku and Ghost Inspector APIs, we automated the manual checks for the jobs and notified the user when the next step was ready to be executed.

We used to have around five deployments per day. Can we say the script saved five minutes per deployment? Honestly, that's a conservative estimate; it was probably more. According to the chart, it would've been worth automating if the automation effort took four weeks or less (considering it worth it if the time investment pays itself off within five years). Writing the script took me at most eight hours. Those eight hours were by far the highest-leverage task I ever did on that project!

Closing thoughts

All of the aforementioned principles can be applied outside of software development too. Personally, I find myself doing optimizations like this in my daily life without even noticing. The latest one: fastening the gate remote to my bike automated away the process of taking off one glove to find the remote in my pocket, using it, then putting the glove back on.

That's all for today. If this post inspired you to set up a pair-programming session, try out a new tool or automate some inefficient process, I consider it a success.
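A deploy helper of the kind described could be sketched as below. The step names, prompts and the `plan_deploy` helper are hypothetical stand-ins; the real script also polled the GitHub, Heroku and Ghost Inspector APIs to tell the user when the next step was ready.

```python
# Hypothetical sketch of a question-driven deploy CLI in the spirit of the
# one described above. Step names and checks are invented for illustration;
# the real script also polled external APIs between steps.

def plan_deploy(num_prs: int, has_migrations: bool) -> list[str]:
    """Build the ordered list of deployment steps for this release."""
    steps = [f"merge {num_prs} PR(s) into the release branch"]
    if has_migrations:
        steps.append("run database migrations in maintenance mode")
    steps += [
        "wait for CI status checks to pass",
        "promote the build to production",
        "run post-deploy smoke tests",
    ]
    return steps

# Interactive use would gather these answers with input() prompts, e.g.:
#   num_prs = int(input("How many PRs do you wish to deploy? "))
for i, step in enumerate(plan_deploy(num_prs=2, has_migrations=True), start=1):
    print(f"Step {i}: {step}")
```

The value of such a script is less the automation of any single check and more that nobody has to hold the whole Notion document in their head while the pipeline is blocked.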
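The payoff arithmetic is easy to check. Under the post's numbers (about five deployments per day, five minutes saved each, and the five-year horizon the xkcd chart uses, counting every day of the year), the break-even automation budget comes out a little over four calendar weeks:

```python
# Back-of-the-envelope check of the automation payoff discussed above.
# Assumptions taken from the post: ~5 deployments/day, ~5 minutes saved
# per deployment, five-year horizon (as in the xkcd chart).

DEPLOYS_PER_DAY = 5
MINUTES_SAVED_PER_DEPLOY = 5
HORIZON_DAYS = 5 * 365  # five years, counting every day as the chart does

total_minutes = DEPLOYS_PER_DAY * MINUTES_SAVED_PER_DEPLOY * HORIZON_DAYS
total_hours = total_minutes / 60
break_even_weeks = total_minutes / (7 * 24 * 60)  # calendar weeks of effort

print(f"time saved: {total_minutes} min = {total_hours:.0f} h "
      f"= {break_even_weeks:.1f} calendar weeks")  # 45625 min, ~4.5 weeks
```

An eight-hour script against a roughly four-week budget is comfortably worth it, before even counting the stress it removes.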

Source post→

Build Time Optimizations (Xcode)

As iOS developers, we encounter this problem frequently: after starting a build, it takes a long time to compile, which pulls us out of our focus zone and reduces productivity. Note: the summary shared above is from an incremental build, and we can observe that, without any changes, CodeSign took a good amount of time that could have been avoided.

Source post→

Five Common Data Quality Gotchas in Machine Learning and How to Detect Them Quickly

The vast majority of work in developing machine learning models in the industry is data preparation, but current methods require a lot of intensive and repetitive work by practitioners. This includes collecting data, formatting it correctly, validating that the data is meaningful and accurate, and applying transformations so that it can be easily interpreted by ...

Source post→

Build real-time video and audio apps on the world’s most interconnected network

We are announcing Cloudflare Calls, a new product that lets developers build real-time audio and video apps

Source post→

Quantization for Fast and Environmentally Sustainable Reinforcement Learning

However, recent work [1, 2] indicates that performance optimizations on existing hardware can reduce the carbon footprint (i.e., total greenhouse gas emissions) of training and inference. To that end, we present “QuaRL: Quantization for Fast and Environmentally Sustainable Reinforcement Learning”, published in the Transactions on Machine Learning Research journal, which introduces a new paradigm called ActorQ that applies quantization to speed up RL training by 1.5-5.4x while maintaining performance.
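The core transformation behind such speedups can be sketched in a few lines. This minimal uniform 8-bit quantization round-trip is illustrative only, not the ActorQ implementation, which quantizes policy networks inside a distributed RL loop:

```python
# Minimal sketch of uniform 8-bit quantization, the kind of transformation
# applied to a policy network's weights. Illustrative only.

def quantize(values, num_bits=8):
    """Map floats to signed integers using one shared scale factor."""
    qmax = 2 ** (num_bits - 1) - 1          # 127 for 8 bits
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

weights = [0.4, -1.0, 0.25, 0.99]
q, scale = quantize(weights)
restored = dequantize(q, scale)
# Reconstruction error is bounded by half a quantization step (scale / 2),
# which is why low-precision computation can maintain task performance.
```

Integer arithmetic on the quantized values is what buys the speedup; the dequantize step shows how little accuracy the round trip costs.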

Source post→

Evolution of Streaming Pipelines in Lyft’s Marketplace

The journey of evolving our streaming platform and pipeline to better scale and support new use cases at Lyft. As product iteration speed increased over time, this older infrastructure became unable to support the faster development cycles, primarily because all the ML feature generation required writing custom logic which would take multiple weeks to develop.

Source post→

A Flexible Framework for Effective Pair Programming

Pair programming shortens the time it takes for new team members to start making a positive impact with their work by developing their technical and soft skills, like problem solving and communication. This framework covers everything you need to run a successful pair programming session, including: roles, structure, agenda, environment, and communication.

Source post→

How to Choose the Best Time-Series Database for Your Project

In a world with more data, where measuring what matters is critical, your time-series database needs to be scalable, reliable, and user-friendly. Here’s how you can choose the best time-series database for your project.

Source post→

How thermal simulation helps optimize Meta’s data centers

Data center optimization has always played an important role at Meta. By optimizing our data centers’ environmental controls, we can reduce our environmental impact while ensuring that people can always depend on our products. As with most other complex systems, optimization of energy consumption is a trial-and-error process. But experimenting on any component of a live [...]

Source post→

Demystifying Modern Data Platforms

With Mark Ramsey, PhD, Chief Data Officer.

Source post→

MemLab: An open source framework for finding JavaScript memory leaks

We’ve open-sourced MemLab, a JavaScript memory testing framework that automates memory leak detection. Finding and addressing the root cause of memory leaks is important for delivering a quality user experience on web applications. MemLab has helped engineers and developers at Meta improve user experience and make significant improvements in memory optimization. We hope it will [...]

Source post→

How a Product Studio Mitigates User Friction with Performance Monitoring

Our developers easily identify and resolve performance bottlenecks – like querying the database while iterating over a collection rather than prefetching – resulting in up to orders of magnitude fewer database queries and shorter response times.
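The bottleneck named here (querying while iterating instead of prefetching) is the classic N+1 query pattern. This toy sketch uses a hypothetical query-counting FakeDB standing in for a real ORM, to show where the "orders of magnitude" comes from:

```python
# Toy illustration of the N+1 query pattern mentioned above. FakeDB is a
# stand-in for a real database/ORM that simply counts issued queries.

class FakeDB:
    def __init__(self):
        self.queries = 0

    def fetch_author(self, book_id):
        self.queries += 1                     # one query per call
        return f"author-of-{book_id}"

    def fetch_authors(self, book_ids):
        self.queries += 1                     # one batched query for all ids
        return {b: f"author-of-{b}" for b in book_ids}

db = FakeDB()
books = list(range(100))

# N+1 style: a query fires on every iteration of the loop.
authors = [db.fetch_author(b) for b in books]
n_plus_one_queries = db.queries               # 100 queries

db.queries = 0
# Prefetch style: one batched query up front, then pure dict lookups.
by_book = db.fetch_authors(books)
authors = [by_book[b] for b in books]
prefetch_queries = db.queries                 # 1 query
```

With 100 items the batched version issues 100x fewer queries, which is exactly the kind of gap performance monitoring surfaces.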

Source post→

Adopting SwiftUI with a Bottom-Up Approach to Minimize Risk

A topic of great debate among many engineers is whether or not SwiftUI is ready for enterprise. It’s no secret that DoorDash has fully embraced it in our Consumer app, as we recently held a technical meetup to share many of the challenges we’ve overcome. Our biggest challenge, however, has been dedicating enough time to ...

Source post→

How to succeed by getting good at failing

Making mistakes is how humans learn. Here's how to embrace a growth mindset and find opportunities in failure.

Source post→

Scaling Git’s garbage collection

A tour of recent work to re-engineer Git’s garbage collection process to scale to our largest and most active repositories.

Source post→

How Stripe builds interactive docs with Markdoc

Delivering a good user experience without compromising the authoring experience required us to develop an authoring format that enables writers to express interactivity and simple page logic without mixing code and content. While developing Markdoc, we learned how to balance interactivity, customization, and authoring productivity while undertaking a major overhaul of our documentation platform.

Source post→