#71 Adventures in Data Maturity – Creating Reliable, Scalable Data Processes – Interview w/ Ramdas Narayanan

Sign up for Data Mesh Understanding’s free roundtable and introduction programs here: https://landing.datameshunderstanding.com/

Please Rate and Review us on your podcast app of choice!

If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here.

Episode list and links to all available episode transcripts here.

Provided as a free resource by Data Mesh Understanding / Scott Hirleman. Get in touch with Scott on LinkedIn if you want to chat data mesh.

Transcript for this episode (link) provided by Starburst. See their Data Mesh Summit recordings here and their great data mesh resource center here.

Ramdas’ LinkedIn: https://www.linkedin.com/in/ramdasnarayanan/

In this episode, Scott interviewed Ramdas Narayanan, Vice President, Product Manager of Data Analytics and Insights at Bank of America. To be clear, he was not representing the company and was sharing his own views.

Ramdas came on to discuss lessons learned over the last 5-10 years from building effective data sharing at scale on the operational plane, so we can apply those lessons to our data mesh implementations.

A key output of the conversation is a guiding principle for getting data mesh right – your goal is to convert data into effective business outcomes. It doesn't matter how cool (or not cool) your platform is – drive business outcomes! It's easy to let that get lost in the tool talk and everything else around data mesh.

Per Ramdas, when looking at creating a data product – or really any data initiative – you need to align first on business objectives, and that alignment will drive funding. In the financial space, that means direct, literal funding, but even outside it you should have the same mindset. Make sure you get engagement and alignment across business partners, technologists, and subject matter experts. How are you using technology to address or solve the business problem?

Ramdas has seen that if you don't focus on creating reusable data, you create silos – you need cohesive data sets, not bespoke data sets for every challenge, because that just doesn't scale. You should also study the data sources you use: is there additional useful data you could add to your data set, or could you use that data for other purposes? Keeping an eye out for additional data that can drive business value adds a lot to your organization.

When working with developers, Ramdas recommends helping them understand how the business is going to consume and use the data, then figuring out whether they should deliver it as something like an API or web service or as more of a custom batch delivery. It is also important to work with data-consuming teams to keep their consumption demands reasonable – getting them to modernize can be a challenge, and that can put an unreasonable burden on producing teams.
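To make that delivery-mode distinction concrete, here is a minimal, hypothetical Python sketch (not from the episode – the data set, field names, and endpoint are all illustrative, and FastAPI is just one possible API framework) showing the same small data set exposed two ways: on demand via an API endpoint, and as a scheduled batch file export.

```python
# Hypothetical sketch: one reusable data set, two delivery modes.
import csv
from datetime import date

from fastapi import FastAPI, HTTPException

# Toy stand-in for a governed, reusable data set (not bespoke per consumer).
CUSTOMER_ACCOUNTS = [
    {"customer_id": "C001", "segment": "retail", "balance": 1250.75},
    {"customer_id": "C002", "segment": "business", "balance": 98210.10},
]

app = FastAPI()

@app.get("/customer-accounts/{customer_id}")
def get_customer_account(customer_id: str) -> dict:
    """API/web-service delivery: suits consumers who need fresh, record-level reads."""
    for row in CUSTOMER_ACCOUNTS:
        if row["customer_id"] == customer_id:
            return row
    raise HTTPException(status_code=404, detail="customer not found")

def export_batch(path: str = "customer_accounts.csv") -> None:
    """Batch delivery: suits consumers who ingest full snapshots on a schedule."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["customer_id", "segment", "balance", "as_of"]
        )
        writer.writeheader()
        for row in CUSTOMER_ACCOUNTS:
            writer.writerow({**row, "as_of": date.today().isoformat()})
```

The API path would be served with something like uvicorn, while export_batch() would be called from a scheduler – the point being that the choice of delivery mode should follow from how consumers actually use the data, not from producer convenience.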

Ramdas talked about how crucial conversations and culture are to getting data projects/products right. Sometimes those conversations are tough, but often they really aren't – there just needs to be an open exchange of context and information, especially to align on business objectives. Projects that fail typically have poorly defined business objectives or lack alignment.

Per Ramdas, it is important to educate business people on what data exists – and even on what data doesn't. A clouded view of what data is available creates a lot of frustration; we need to get better at data discoverability in general so business folks know what is available and can get access easily. Ramdas has seen repeatedly that good context via rich metadata also leads to better context sharing at the person-to-person level, as it generates additional conversations.

To emphasize that point a bit more, Ramdas believes that data discovery is the main spark for sharing context. Otherwise, we are at best exchanging data as 1s and 0s instead of the actual information.

Ramdas believes everyone needs to understand how information flows through your systems – it helps you better understand the art of the possible and identify gaps in how you will approach your challenges. Start your projects – whether a new data product, a new platform feature, or anything else – with plenty of information architecture meetings. After that, start to focus on data discoverability. “Show and tell” sessions have worked well for him, as they spark new thoughts and help surface issues.

Ramdas wrapped up on a really crucial part of data maturity: the curiosity factor. Always be asking why you are doing something. What problem are we actually trying to solve? Do we have the capabilities to solve it? How does the data flow through our systems? Can we push data quality upstream to prevent quality issues rather than remediate them? What guardrails can we put in place to prevent issues? How can we enrich our metadata to make this data even more valuable? Etc.

Data Mesh Radio is hosted by Scott Hirleman. If you want to connect with Scott, reach out to him on LinkedIn: https://www.linkedin.com/in/scotthirleman/

If you want to learn more and/or join the Data Mesh Learning Community, see here: https://datameshlearning.com/community/


All music used in this episode was found on PixaBay and was created by (including slight edits by Scott Hirleman): Lesfm, MondayHopes, SergeQuadrado, ItsWatR, Lexin_Music, and/or nevesf
