Sign up for Data Mesh Understanding’s free roundtable and introduction programs here: https://landing.datameshunderstanding.com/
Please Rate and Review us on your podcast app of choice!
If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here
Episode list and links to all available episode transcripts here.
Provided as a free resource by Data Mesh Understanding / Scott Hirleman. Get in touch with Scott on LinkedIn if you want to chat data mesh.
Transcript for this episode (link) provided by Starburst. See their Data Mesh Summit recordings here and their great data mesh resource center here.
What the Heck is a Data Mesh?! Post: https://cnr.sh/essays/what-the-heck-data-mesh
Chris’ Twitter: https://twitter.com/criccomini
Chris’ LinkedIn: https://www.linkedin.com/in/riccomini/
Chris’ website: https://cnr.sh/
The Missing README book: https://themissingreadme.com/
In this episode, Scott interviewed Chris Riccomini, a Software Engineer, Author, and Investor. Chris led the infrastructure team at WePay when it embarked on its data mesh journey and wrote a well-regarded post on thinking about data mesh in DevOps terms.
Like a number of past guests and their organizations, Chris and the team at WePay were pursuing the general goals of data mesh and applying some of its approaches – but not nearly as cohesively as Zhamak later laid things out.
Their initial setup had two teams managing the pipeline/transformation infrastructure: Chris' team mostly handled the extracting and loading, and a team of analytics engineers handled the transformations. The transformation team saw a major increase in demand and quickly became overloaded – a bottleneck. Chris' team also started to get overloaded, so they knew they had to evolve.
One way the team started to address the bottlenecks was by decentralizing the pipelines: teams could make a request and a scalable, reliable pipeline would essentially be set up for them automatically. Because WePay is in the financial services space, those pipelines also had to reduce risk – teams could mark their sensitive/PII columns, and the infra team added some autodetection capabilities to make sure none were missed.
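As a rough illustration of that idea (a hypothetical Python sketch, not WePay's actual tooling – names like PipelineConfig and PII_NAME_HINTS are made up), a pipeline config could let the owning team declare sensitive columns while the platform runs a simple name-based autodetection pass as a backstop:

    # Hypothetical sketch: the owning team declares sensitive columns in its
    # pipeline config, and the platform adds a name-based autodetection backstop.
    from dataclasses import dataclass, field

    PII_NAME_HINTS = {"ssn", "email", "phone", "dob", "account_number"}

    @dataclass
    class PipelineConfig:
        table: str
        columns: list[str]
        declared_pii: set[str] = field(default_factory=set)  # marked by the owning team

    def detect_pii_columns(config: PipelineConfig) -> set[str]:
        """Union of team-declared PII columns and name-based guesses."""
        guessed = {c for c in config.columns
                   if any(hint in c.lower() for hint in PII_NAME_HINTS)}
        return config.declared_pii | guessed

    config = PipelineConfig(
        table="payments",
        columns=["payment_id", "amount", "payer_email", "created_at"],
        declared_pii={"payer_email"},
    )
    print(detect_pii_columns(config))  # {'payer_email'}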
WePay created a “canonical data representation” or CDR, which is pretty analogous to a data product in data mesh. Chris really liked WePay’s use of the embedded analytics engineer to serve as a data product developer.
One key innovation for WePay was tooling to enable safe application schema evolution. It looked for things like dropped columns and had more comprehensive data contract checking mechanisms, allowing developers to test changes pre-commit. In 80-90% of cases, the potential data breakages were changes the developers had no idea would cause an issue, and they simply reverted them. The other 10-20% of the time, the developers still wanted to go through with the change, which kicked off a negotiation with data consumers. That forced conversation was very helpful for a few reasons.
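As a rough sketch of what such a pre-commit compatibility check might look like (hypothetical Python, not WePay's actual mechanism), the idea is to diff the proposed schema against the current one and flag dropped or retyped columns before the change ships:

    # Hypothetical pre-commit style check: compare the proposed schema to the
    # current one and flag backwards-incompatible changes.
    def find_breaking_changes(current: dict[str, str], proposed: dict[str, str]) -> list[str]:
        problems = []
        for column, col_type in current.items():
            if column not in proposed:
                problems.append(f"column dropped: {column}")
            elif proposed[column] != col_type:
                problems.append(f"type changed: {column} {col_type} -> {proposed[column]}")
        return problems

    current_schema = {"payment_id": "string", "amount": "decimal", "payer_email": "string"}
    proposed_schema = {"payment_id": "string", "amount": "float"}

    for problem in find_breaking_changes(current_schema, proposed_schema):
        print(problem)
    # column dropped: payer_email
    # type changed: amount decimal -> float

If the check finds nothing, the change can merge quietly; if it does, the developer either reverts or starts the consumer conversation Chris described.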
Chris talked about standardizing around technologies for the platform while allowing teams to roll their own if they wanted – but being super clear with those teams that the infra team wouldn't support those other technologies, even though teams were allowed to use them. He also sees a major need for an API gateway concept for data: currently, everything around versioning, auto-documentation, etc. is way too manual and high friction.
Chris talked about taking the learnings from DevOps and applying them to data mesh. One good one to look at, per Chris, is the embedded SRE concept – should you do the same with a data or analytics engineer? There is also a need for many standards and replicable patterns. WePay launched a data review committee, similar to a design review committee, that helped come up with standard data models and other standards.
Sherpas, not gatekeepers – build out your review functions as councils to guide and disseminate knowledge. The team's role should be about assisting where they can and being a trusted partner. What WePay saw was that as people went through more reviews, there was less need for the committee because teams learned what good/best practices looked like.
Lastly, Chris wrapped up on a point other guests have made: the cost of making a mistake needs to be as low as possible. Reprocessing historical data can be very costly in both time and compute.
Data Mesh Radio is hosted by Scott Hirleman. If you want to connect with Scott, reach out to him on LinkedIn: https://www.linkedin.com/in/scotthirleman/
If you want to learn more and/or join the Data Mesh Learning Community, see here: https://datameshlearning.com/community/
All music used this episode was found on PixaBay and was created by (including slight edits by Scott Hirleman): Lesfm, MondayHopes, SergeQuadrado, ItsWatR, Lexin_Music, and/or nevesf