#119 Cautionary Learnings From a Startup Doing Data Mesh: Orfium’s Journey to Decentralized Data Success – Interview w/ Argyris Argyrou and Konstantinos Siaterlis

Sign up for Data Mesh Understanding’s free roundtable and introduction programs here: https://landing.datameshunderstanding.com/

Please Rate and Review us on your podcast app of choice!

If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here

Episode list and links to all available episode transcripts here.

Provided as a free resource by Data Mesh Understanding / Scott Hirleman. Get in touch with Scott on LinkedIn if you want to chat data mesh.

Transcript for this episode (link) provided by Starburst. See their Data Mesh Summit recordings here and their great data mesh resource center here.

Argyris Argyrou’s LinkedIn: https://www.linkedin.com/in/argyrisargyrou/

Konstantinos “Kostas” Siaterlis’ LinkedIn: https://www.linkedin.com/in/siaterliskonstantinos/

In this episode, Scott interviewed Argyris Argyrou, Head of Data, and Konstantinos “Kostas” Siaterlis, Director of Big Data at Orfium. There is a ton of useful information on anti-patterns, what is going well now, advice, etc. in this one.

From here forward in this write-up, A&K will refer to Argyris and Kostas rather than trying to specifically call out who said which part in most cases.

Some key takeaways/thoughts from A&K’s points of view:

  1. On a data mesh journey: “It’s not a sprint, it’s a marathon.” Pace yourself: it’s okay to go at your own speed. Don’t worry about what other people are doing with data mesh; do what’s right for you.
  2. Really focusing on the why and showing people results was a far better driver of buy-in and participation than any amount of selling data mesh as a practice. Calling it data mesh when trying to explain it to people outside the data team didn’t go well either…
  3. Orfium’s “Data Doctor” approach – low-friction, low-pressure office hours held by a staff data engineer – has really helped people with their data challenges and spread good data practices without the “Doctor” becoming a bottleneck.
  4. The Data Doctor’s role is to answer questions and provide guidance but not do the work for people. Then, the Data Doctor documents what was discussed and the relevant best practice so others can learn from it – providing good leverage for scaling best data practices.
  5. In a smaller company like Orfium (~250 people), it’s hard to justify a lot of full-time heads to implement data mesh. And trying to treat a data mesh implementation like a side-project also creates issues. There isn’t a great answer here on exactly what to do except possibly take things slower than most startups are used to. Your data will still be waiting for you a few months later.
  6. If you are having difficulty driving broad buy-in, showing people what data mesh can do in action really helped at Orfium. Once they saw the approach delivering value, they wanted to participate.
  7. When trying to drive buy-in, specifically talking about data mesh didn’t work well with non-data folks. Data mesh is confusing enough for data folks – just imagine how it lands for non-data folks.
  8. Trying to use Zhamak’s articles as the optimal early state – where you need to be just to get moving – requires far too much work. Get to a place where you can try, learn, iterate, and repeat on your way to driving value. It’s a journey!
  9. It’s probably not a great idea for your first use case to be your most advanced or complicated – you will build your platform to focus on serving those needs instead of general affordances. Jen Tedrow’s episode covers this quite nicely.
  10. Really assess how much additional work your data products will be for a data product owner. For Orfium, it was something to add to the existing product managers’ plates as it wasn’t a huge incremental burden just yet.
  11. Consider splitting your mesh data product ownership between business context ownership and technical ownership.
  12. It’s okay to head down the data mesh path while learning what domains really mean. Orfium was not doing Domain Driven Design in any sense before starting to do decentralized data.
  13. Don’t try to get all your teams to start developing data products at the same time. It is very hard to work with and upskill that many teams, even in a smaller organization.
  14. A&K recommend starting slower than you probably want on building out your platform, and starting small in general: don’t take on the biggest challenges or too many challenges at once.
  15. It’s okay to have a very high-level concept of a mesh data product. For Orfium, that is: it’s a product that is designed like any other software product – it solves problems for customers and is delivered via API. That helps non-technical people understand what they are delivering.
  16. “Really think about what you are doing and why. Why complicate it more than that?” – said about data products but pretty universal in data mesh. It’s easy to overcomplicate…

About 2.5 years ago, when Argyris joined the company, Orfium – a growing startup in the music industry royalty business – was really starting to see a big uptick in data requirements to serve their customers and offer new features/capabilities. They had 3-4 people doing the data engineering and data science work plus another centralized BI team, but the need for more advanced ML and AI was becoming clear. The centralized data capabilities were becoming a bottleneck, so either they’d need to significantly scale the number of people in centralized data functions or look to decentralize in some way. They decided to try data mesh because they were feeling the exact challenges Zhamak so clearly laid out in her articles.

Kostas was really bought in to data mesh’s self-serve aspect. Pretty much all the data in the company for analytics was flowing through him and his small team, and that was understandably draining. And while data mesh is more of a cultural approach than a technical one, he said, “I will build something interesting either way”: the loss of certain data engineering tasks meant he could focus on building the platform, which was just as interesting, if not more so.

At first, for A&K, data mesh was definitely a bumpy road – they tried to do data mesh as a side project while keeping up with everything else they were doing. They couldn’t justify treating the data mesh implementation as a main focus at the expense of the many things in production, and they didn’t have the spare headcount to do it either. As they built the first version of the data platform, they were having difficulty explaining data mesh to non-data folks. So, the team most bought in and willing to try a data mesh approach was the central data engineering team. Thus, the first version of the platform was advanced data tooling for data-intensive use cases with the central data team as the main users. It worked well for the central data team, but once they tried to bring on other users, the platform wasn’t really built for people who weren’t highly data literate with intensive use cases, so driving buy-in with other teams was hard.

On trying to drive buy-in, A&K worked to explain what they were doing and what data mesh was internally. They started from the why: why would this matter? That got the exec team excited. But when they tried to sell data mesh to the engineering managers and engineers, it fell flat. The non-data folks didn’t really understand the nuances of data mesh, and in most cases they didn’t need to in order to participate in and benefit from the data mesh implementation. It’s very easy to overwhelm people with all the aspects of data mesh instead of what matters.

What ended up driving buy-in quite well, per A&K, was seeing the output of treating data like a product in action. Once there were tangible benefits and people could see what value a mesh data product could deliver, they were much happier to move forward in participating in the data mesh implementation.

Per A&K, a few of their missteps – so you can avoid doing the same – were: 1) trying to sell data mesh by its principles and using the phrase “data mesh” instead of the why and what changes for whoever they were talking to; 2) building the platform to serve the most data-intensive use cases owned by data engineers, so it was quite hard and not really suitable for others to use; 3) trying to get to advanced maturity in all parts of the data mesh implementation up front – e.g. they still don’t have fully automated access control but it’s not really a big pain point; 4) thinking that Zhamak’s articles or book are the blueprint for where you have to be at early stages instead of an inspirational goal multiple years into a data mesh journey; and 5) trying to get all teams to move together at the same time with data mesh instead of working team by team or mesh data product by mesh data product.

So, what _did_ work for A&K at Orfium in their data mesh journey? Again: 1) showing people the results and the value the approach could deliver for a use case; 2) not hiring new people into dedicated roles – when they looked at the additional workload, with good support and upskilling/partnering, the domains could handle it as they were (per Scott, this tends to be the case in smaller companies with smaller domains doing decentralized data); 3) focusing on the why: why does this matter? If we get this right, what will that get us? And 4) using data mesh as an enabler to change people’s hearts and minds about owning and using data; it’s now more a core part of teams’ responsibilities and they are taking it seriously.

What does good look like for Orfium in their data mesh journey right now? Per A&K, their teams understand the difference between operational and analytical data and are starting to manage their analytical data as a product. It has changed the role of data engineering and how people perceive the central data team internally: they are now the enabler, not the team that does the work for everyone else. Their ML/AI teams are able to get quality data reliably so they can build out new use cases. They still don’t have fully automated access control but have focused on making it far easier to request and grant access, and that’s good enough for them at the moment. They are at 10-15 data products – it’s hard to say exactly what constitutes a product, hence the range – with far happier data consumers.

Orfium wasn’t doing Domain Driven Design (DDD) on the operational or data side of the house prior to starting on their data mesh journey. And A&K think it’s totally viable to not be doing DDD at all before starting. Other guests on episodes more centered on DDD for data have said similar things. This is permission to move forward.

A&K gave some insight into how attitudes and understanding relative to data have changed at Orfium. Previously, someone would want to do some analytical work against a team’s data and the product manager would get them some kind of DB access, so they could only access the data as it was stored for the operational system. There wasn’t a clear separation between operational data and analytical data. So a lot of the evolution was just getting teams to understand how their data might be used for analytics and then to produce and own data specifically for analytical use. Far easier said than done but still achievable. What also really helped was splitting business and technical ownership of the analytical data.

So, with all this learning behind them, what are a few bits of advice from A&K? Start slower on the technology build-out; it’s exciting to build cool stuff but that can wait – what do you actually need to capture the low-hanging-fruit value? Start small; don’t try to have all the domains (or, if you are not doing DDD yet, your teams) move all at once. And don’t mention the phrase data mesh to people outside the data team – it typically just generates confusion. Speak to the value of the approach, not the technology or what it changes for the data team: what does it change for them?

When working with teams to understand the concept of a mesh data product, it’s easy to overcomplicate things per A&K. For Orfium, there is a technical definition of a mesh data product and a business one. The business definition is quite simple: it’s a product designed like any other software product – it needs to solve a problem for customers and is delivered via API. Really think about what you are doing and why, “why complicate it more than that?” The data products create a platform of information for teams to build data-informed applications on top of. Oh, and don’t forget really good documentation for your data products.
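To make that definition a bit more concrete, below is a minimal sketch of what a descriptor for such a data product could look like. This is not Orfium’s implementation; every field name, owner, and URL is hypothetical. It simply captures the business definition (solve a problem, deliver via API, document it well) plus the business/technical ownership split mentioned above.

```python
from dataclasses import dataclass, field


@dataclass
class DataProductDescriptor:
    """Hypothetical descriptor for a mesh data product.

    A sketch of the metadata implied by the business definition discussed in
    the episode: the product solves a problem for customers, is delivered via
    API, has both a business owner and a technical owner, and points to good
    documentation. None of these fields come from Orfium's actual platform.
    """
    name: str                # e.g. "royalty-claims-monthly" (made up)
    problem_statement: str   # the customer problem this product solves
    business_owner: str      # e.g. the product manager owning business context
    technical_owner: str     # the engineer/team owning pipelines and quality
    api_endpoint: str        # how consumers actually get the data
    documentation_url: str   # the "really good documentation"
    consumers: list[str] = field(default_factory=list)  # known consuming teams


# Entirely made-up example, just to show the shape:
claims_product = DataProductDescriptor(
    name="royalty-claims-monthly",
    problem_statement="Give licensing analysts reliable monthly claim totals",
    business_owner="claims-product-manager",
    technical_owner="claims-data-engineering",
    api_endpoint="https://data.example.internal/products/royalty-claims/v1",
    documentation_url="https://docs.example.internal/data-products/royalty-claims",
)
```

The specific fields matter less than the fact that the problem being solved, the API, the documentation, and both owners are all stated explicitly, which is roughly what keeps the concept legible to non-technical stakeholders.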

Orfium has a really interesting concept they are using internally: “The Data Doctor”. Essentially, people come in with their “data symptoms” and the Data Doctor gives them a “prescription” – advice on how to address their challenge using best practices. It’s a low-pressure way to have something like a staff data engineer hold office hours to help people with their data challenges, ensuring people follow best practices but also learn how – and gain the confidence – to implement the recommendations themselves. Then, the Data Doctor works with whoever implemented the advice to document the process and put it in a central repository so others can easily follow the same practices if they hit the same or a similar challenge.

Per A&K, on your data mesh journey: “It’s not a sprint, it’s a marathon.”

Data Mesh Radio is hosted by Scott Hirleman. If you want to connect with Scott, reach out to him on LinkedIn: https://www.linkedin.com/in/scotthirleman/

If you want to learn more and/or join the Data Mesh Learning Community, see here: https://datameshlearning.com/community/

All music used this episode was found on PixaBay and was created by (including slight edits by Scott Hirleman): Lesfm, MondayHopes, SergeQuadrado, ItsWatR, Lexin_Music, and/or nevesf
