In this episode, Scott discusses three concepts that he finds concerning, at best. Consider it a late Grinch-inspired present for Xmas 🙂
Reverse ETL meets a real need: pushing analytical data into CRM, marketing, and other similar systems. But treating yet another pipeline as a first-order concern is fraught with the same issues as most data pipelines: who owns it, how does it evolve, who is observing/monitoring it for uptime and semantic drift, etc.? Should we look to create data products on the mesh to serve those needs instead of reaching for another ETL tool?
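To make the ownership and semantic-drift concern concrete, here is a minimal, hypothetical sketch of what a reverse ETL push looks like. Every name here (the row fields, `FakeCrmClient`, `upsert`) is an illustrative stand-in, not any real tool's API; the point is that the mapping function is an implicit contract someone has to own.

```python
def to_crm_payload(row):
    """Map an analytical warehouse row to the shape the CRM expects.

    This mapping is exactly where semantic drift hides: if the
    warehouse renames or redefines 'ltv', the CRM silently receives
    stale or wrong values unless someone owns this contract.
    """
    return {
        "external_id": row["account_id"],
        "lifetime_value": row["ltv"],
        "churn_risk": row["churn_score"],
    }

class FakeCrmClient:
    """Stand-in for a real CRM API client (illustration only)."""
    def __init__(self):
        self.updates = []

    def upsert(self, payload):
        self.updates.append(payload)

def sync(rows, crm):
    """The 'pipeline': push each analytical row into the CRM."""
    for row in rows:
        crm.upsert(to_crm_payload(row))

rows = [{"account_id": "a-1", "ltv": 1200.0, "churn_score": 0.18}]
crm = FakeCrmClient()
sync(rows, crm)
print(crm.updates[0]["lifetime_value"])  # 1200.0
```

Nothing here is hard to write once; the open questions from the episode are about everything around it: who monitors it, who is paged when the upstream column changes meaning, and whether a mesh data product would make that contract explicit instead.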
Some organizations implementing data mesh are forcing their domains to consume any analytics from their own data products on the mesh. The upside is that it aligns the domain with creating high-quality data products. But will those data products be designed to fit the general organizational needs or specifically the domain's needs?
There is an emerging push for software engineers to also own the data modeling. To get to a place where this is even feasible, don't we need far better abstractions for domains to _do_ the data modeling? And will this overload software engineers who are already dealing with a metric buttload of technologies and requirements? Where would a junior engineer fit in that kind of organization? Does this mean more software engineers on the team -> 2-pizza teams now 3? 4? 5? 10? Maybe we pump the brakes on this for now?
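What might a "better abstraction" for domain-owned data modeling look like? A rough, hypothetical sketch: the domain engineer declares the model, and the platform supplies the mechanical parts (validation here; in reality also serialization, schema registration, etc.). The model name `OrderPlaced` and the `validate` helper are invented for illustration, not from any real framework.

```python
from dataclasses import dataclass, fields

# Domain engineer's job: declare what the analytical record means.
@dataclass
class OrderPlaced:
    order_id: str
    customer_id: str
    total_cents: int

# Platform's job: generic machinery the domain team never writes.
def validate(record: dict, model) -> bool:
    """Check that a raw record has every declared field with the
    declared type. A real platform would also handle evolution,
    registration, and publishing."""
    return all(
        f.name in record and isinstance(record[f.name], f.type)
        for f in fields(model)
    )

good = {"order_id": "o1", "customer_id": "c1", "total_cents": 995}
bad = {"order_id": "o1", "total_cents": "995"}
print(validate(good, OrderPlaced))  # True
print(validate(bad, OrderPlaced))   # False
```

If the abstraction stays at roughly this level of effort, asking software engineers to own data modeling looks plausible; if every model drags in a pipeline's worth of tooling, the overload concern in the episode stands.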