S2E11: Ingest, Store, Model, Process, Serve: The Core Components of Data & Analytics Technology

This is a podcast episode titled "S2E11: Ingest, Store, Model, Process, Serve: The Core Components of Data & Analytics Technology." The summary for this episode is: This week's episode brings back Shaun McAdams and Warren Sifre to continue their discussion on the 5 Pillars of Data & Analytics. This time their focus is tech: all the equipment, systems, gadgets, and doodads required to meet your Data & Analytics needs.

Angel Leon: Hello everyone, and welcome to another edition of ASCII ANYTHING, presented by Moser Consulting. I'm your host, Angel Leon, Moser's HR advisor. Today's episode continues a series of conversations from season one between Shaun McAdams and Warren Sifre, two of Moser's top data and analytics experts. Shaun is Moser's Vice President of Data & Analytics, and Warren is the Director of Strategy within our Data & Analytics group. In this week's episode, they're talking about the tech they use when implementing a data and analytics strategy. They're building on their previous conversation, so this is another special episode where they continue exploring the data and analytics world and how you can apply it to your business. Without further ado, here are Shaun McAdams and Warren Sifre.

Shaun McAdams: All right, thanks Angel. So Warren, we're back at it again. [crosstalk] For those that are just joining this one and haven't heard Warren and me talk, we've been talking about data and analytics strategy: how does an organization prepare itself to actually deliver data and analytics products? Previously, we've talked about people. We've talked about process, which we both agreed was the biggest gap we see for people that are trying to deliver data and analytics products. We did a sidebar conversation with Databricks, and we talked about the different types of data architectures that have existed to make analytic products a reality: we talked about the enterprise data warehouse, we talked about the data lake, we talked about the lakehouse, which is something that's new in the past year or so. Today we're going to tackle one of the last two pillars and talk about tech. So we have data and tech left; we're going to talk about tech today, not data. But we've not always socialized tech the way we do now. So why don't we take a little bit and talk about the different types of questions we get, or engagements we get, and how we've rolled that back into what we socialize now.

Warren Sifre: Moser being a consulting firm, we find ourselves in a position where we're being asked to come into organizations and help them out at various stages of their data maturity model. It's anything from, "Hey, I'm trying to replace SSRS or Cognos with something more modern," to "Look, I want to do some machine learning or some AI," or "I want to establish a data warehouse," or "I want to go from on-prem to cloud. How do I modernize? What's the path, and what are the platforms and technologies in place?" Coming into these organizations at these various stages, we've had the privilege of being able to architect for them, then step back, look at all these different solutions we've done, and establish some common core aspects: if you've got this piece, it will get you this level of reporting, this type of analytics. It also sets the stage for you to leapfrog to the next level of that maturity model as you go. And it's really important to recognize that, whatever strategy you're taking from a technology perspective, you've got to take that into account, or else you're going to find yourself in a corner. You're going to find yourself rinsing and repeating, replacing technology again, re-engineering things you've done before, because it wasn't taken into account how the pieces would play together and how the technology would evolve. So we've come up with an architecture that is vendor agnostic. We talk about what components would be necessary to accomplish certain things. And I guess the piece we want to start with in this conversation is: what are those things we need to be able to accomplish? What are the goals of this platform?

Shaun McAdams: Right. Yeah. So regardless of what type of tech, regardless of whether you're on-prem or in the cloud: if you're getting into creating and delivering data or analytic products, what are the core competencies needed by a platform in order to do that type of work? Now, before we talk about those core competencies, there are other tools that you need to deliver work. For example, you might have to integrate with some type of Active Directory, some type of IDM. You might want a tool set for your people to help manage their work, like an agile type of platform. You're probably going to want to persist code somewhere and make sure that whoever's working on it is doing that in a very mature way. So we're not going to talk about those particular pieces that help support the people delivering; we're going to talk specifically about what types of technology capabilities we need in order to do data and analytics well. So what are those core components? Regardless of architecture, doesn't matter if you're on-prem, doesn't matter if you're in the cloud: what does an organization need in order to do this well?

Warren Sifre: Well, there are five characteristics and components that we feel need to be included in the decision-making about what products you're going to use, what the current state of your data maturity model is, where you want it to go, and how to get there. The first one is an ingest layer. We need something to bring data together: pull it in, transform it from the source systems where it sits, and bring it into something. We need to be able to store it. We need a repository where we can land this data and have it sit there as just another asset; when you're ready to use it, it's there. We need to be able to model it, because this data comes in in a shape that may not be usable by some downstream systems, so we need to model it in a way that allows us to locate and find the data we need. We need to be able to process this data in some way, because, as I alluded to earlier, you're storing the data and you need to model it; there's probably a transformation taking place there, especially if you're pulling data from multiple sources. If you've got three or four different ERP systems, because of the line of business you happen to be in, or the [inaudible] market you're in, or you're acquiring new businesses, you've got to mesh all of that together to get your bottom line. So how do you do that when the systems are disparately different? And then the last piece is to serve that content up. You've built this model, you've processed it, you've stored it permanently, you've got the means of getting it in. But now, how do you make it available for people to use in something like Tableau, Power BI, Qlik Sense, Sisense, or some other tool on the other end? How do you serve that up? So: ingest, store, model, process, and serve.
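
To make the five components concrete, here is a minimal sketch of the pattern as five pipeline stages. This is purely illustrative, not any vendor's API: the table, column, and file names are hypothetical, and pandas with local files stands in for a real platform.

```python
# Illustrative sketch: the five platform components as pipeline stages.
# All table, column, and file names are hypothetical.
import sqlite3
import pandas as pd

def ingest(source_conn) -> pd.DataFrame:
    # Ingest: reach into the source system and pull the raw rows.
    return pd.read_sql("SELECT * FROM orders", source_conn)

def store(df: pd.DataFrame, path: str = "orders_raw.parquet") -> str:
    # Store: land the data one-to-one in raw storage, untransformed.
    df.to_parquet(path)
    return path

def process(df: pd.DataFrame) -> pd.DataFrame:
    # Process: clean and conform (types, nulls) before modeling.
    df = df.copy()
    df["order_date"] = pd.to_datetime(df["order_date"])
    return df.dropna(subset=["order_id"])

def model(df: pd.DataFrame) -> pd.DataFrame:
    # Model: reshape into an analytics-friendly structure.
    return df.groupby("customer_id", as_index=False)["amount"].sum()

def serve(df: pd.DataFrame, path: str = "customer_totals.csv") -> str:
    # Serve: publish for downstream consumers (a BI tool, file share, API).
    df.to_csv(path, index=False)
    return path

# Usage, assuming a local SQLite file stands in for the source system:
# conn = sqlite3.connect("erp.db")
# serve(model(process(ingest(conn))))
```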

Shaun McAdams: Yeah. So now when we go and interface with clients that are at different levels, to your earlier point, they could have a particular technical deficiency. For example, when we were talking with people at a CDAO conference, a lot of people were presenting on doing machine learning and AI. You sat down and talked with them and asked how they're centralizing their data, and you find that they don't have any of that. They just took quality datasets and brought them into a data science platform in order to create an insight, and there's value in that. But it's not repeatable; it's manually repetitive. It's nothing they can actually automate to move machine learning toward artificial intelligence. So for us to be effective, we have to have this foundation that says, "Let me analyze you through these concepts." In all of these conversations we've had, we've talked about it: we've had these core concepts for people, for process, and now for technology. In our visualizations, when we go out and talk to people about this, we do put a Lambda architecture over it, so they can understand that, regardless of the velocity of data coming in, we want to be able to meet it at all of those layers, from beginning to end. Just to make that clear: if you have actual IoT event-based data, and you need to serve it out with real-time analytics, you're still going to do those five things; it's just maybe going to look a little bit different, or you may need different technologies than if you're going to acquire data from some system every night and store it in raw storage. But it's still those five things. So maybe we repeat those, make sure I got those right, Warren. We want to ingest, store, model, process, and serve.

Warren Sifre: Yep.

Shaun McAdams: All right. And so now we use that as a lens, regardless of the technology, regardless of whether they're on-prem or in the cloud. But let's go through some examples of technology that we have used to satisfy those five core components. The other thing I will say before we do that: those five things, usually we'll say, "Hey, there's your minimally viable platform." If we went to a new organization and we were building one from the ground up, and maybe they're not quite ready for a data science platform, that's okay. The serve layer could just exist as a data sharing mechanism, a file transfer. It could be a BI tool. But we're going to make sure that we can get data out of this environment to the folks that need it. All right. So let's go to ingest. What are some different types of technologies that we see folks using in that ingest [crosstalk]?

Warren Sifre: So the ingest layer: in this case, it could be your traditional on-premise ETL tools, like your Informaticas, your SSIS. You can go to the cloud and start going down the path of Azure Data Factory, or AWS Glue, or some of these third-party or open source things, like maybe pulling in data using Spark clusters, maybe Databricks. So we've got these different means of reaching into source systems and bringing data in. But what we want to do is recognize that this tool is being selected to facilitate the characteristic of ingestion, and that we've addressed that piece. It's not, "Hey, we got this one tool that does it all." "Well, what is all?" "Well, all is anything I ever want." No. "Well, what do you want?" Let's talk about what you need to be able to do what you want. And if it's cost prohibitive, if you're not ready to go the full route of having a separate high-end tool for each of ingest, store, model, and process, fine. How are you achieving these goals, meeting these five characteristics, with the tools that you do have? And then, as you work your way through that journey, you start bringing in tools that are more specialized. You talked about the Lambda architecture, about being able to deal with data in motion, the near real-time operational reporting you would get out of something like that. You may not have a tool to do that yet, but that doesn't mean you don't have something dealing with ingestion. You just might need a second tool to do that, if the primary tool you're using now cannot facilitate the rapid response you're looking for.
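
For the ingest layer specifically, here is a hedged sketch of one pattern Warren mentions: pulling a source table with a Spark cluster and landing it in raw storage. The connection details and paths are placeholders, and running it would require a Spark environment with the appropriate JDBC driver available.

```python
# Sketch of ingestion with Spark (e.g., on Databricks); details are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingest-example").getOrCreate()

# Reach into a JDBC-accessible source system and pull a table as-is.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://source-host:5432/erp")  # hypothetical source
    .option("dbtable", "public.orders")
    .option("user", "reader")
    .option("password", "<secret>")
    .load()
)

# Land it untransformed in raw storage (S3 here; ADLS or HDFS work the same way).
orders.write.mode("overwrite").parquet("s3a://raw-zone/erp/orders/")
```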

Shaun McAdams: Yeah. When I look at the ingestion layer now... obviously we're going to talk about storage later, and that's important; that's an important aspect of which tool you're going to use. Because is your architecture set up as an enterprise data warehouse, or is it a data lake architecture? That's the sink where you're going to store the stuff. If you're in a data warehouse architecture, which a lot of organizations are, then they may have these connector-type tools, like Fivetran, Matillion, and so on and so forth, that are procuring data from some other system's database, or maybe a file drop situation, and bringing it in to put into a data warehouse. There are a lot more things that tool is now going to have to do, because you've bypassed what we would call the raw storage option. So you might have to do some processing now, which is one of the other things you need to do. But for the most part we would say, "Hey, storage..." If you go back to when we talked about engineering, we advocate the raw storage architectures that exist out there: S3; HDFS if you're on Hadoop; Google Cloud Storage; ADLS or Blob Storage in Azure. Those are the predominant ones we would list in the store criteria. But if you don't have raw storage, what are some of the other options you would use for that store?

Warren Sifre: It could be a relational database, the staging area of a relational database. And this goes back to whether you're going to adopt an ETL or an ELT process. Again, depending on the technology you have and where you're at, one may make better sense than the other. But you're going to want to land the data somewhere, and in most cases, let's assume an ELT process, you're going to do a one-for-one. You're going to reach into a database, pull that content out, and probably store it in whatever the intermediary layer in the relational engine is going to be, to make its way to the data warehouse in some way, or a data mart or something along those lines, in its raw form, one-to-one. We're not transforming anything, because we need to be able to establish lineage. Did we get all the records, did all the fields populate, did our checksums match, all these different things. And depending on the criticality of the data, you may even have more than that. Again, this goes back to industry; it goes back to some of the requirements as to how robust this layer needs to be. But you're still storing it somewhere. You may not have a data lake, but you've got a staging area, you've got a raw data layer, you've got something somewhere, and some tool is playing that role.
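
The lineage checks Warren describes (did we get all the records, did the fields populate, do the checksums match) might look something like the following sketch. The column names are hypothetical, and pandas stands in for whatever engine actually holds the staged data.

```python
# Sketch of staging-layer lineage checks; column names are illustrative.
import hashlib
import pandas as pd

def row_count_matches(source_df: pd.DataFrame, landed_df: pd.DataFrame) -> bool:
    # Did we get all the records?
    return len(source_df) == len(landed_df)

def null_rate(df: pd.DataFrame, column: str) -> float:
    # Did all the fields populate?
    return df[column].isna().mean()

def column_checksum(df: pd.DataFrame, column: str) -> str:
    # Order-independent checksum over one column's values.
    joined = "|".join(sorted(df[column].astype(str)))
    return hashlib.sha256(joined.encode()).hexdigest()

# Usage: fail the load rather than propagate a bad extract.
# assert row_count_matches(source, landed)
# assert column_checksum(source, "order_id") == column_checksum(landed, "order_id")
```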

Shaun McAdams: So if you're an organization today that doesn't have an architecture set aside for analytics, what are some of the reasons we advocate a different architecture, or a different storage mechanism, for analytics, rather than just connecting your BI tool to the production operational systems?

Warren Sifre: Connecting your analytical and [inaudible] tools to a production system, there's a lot of performance impact risk there. It may not exist for you, because you've got a special tool that does all kinds of magic, where the traditional locks you would get from a relational database don't exist in your world. But in most cases, most applications deal with a relational database, and you've got locking and blocking that takes place. That's a way of maintaining the transactional integrity that's there, and thus connecting a visualization tool directly to a source system is something you want to try to avoid if possible, because of the performance impact on that source system. Now, it depends on the role of the source system: if you're doing a thousand transactions a second, it's going to be really felt; if it's something doing two transactions a day, there's probably not going to be much impact at all. Again, it goes back to the requirements. But making our way through that, and coming up with the pattern of ensuring there's a middle layer separating your source system from whatever serving tool you're using, is going to be key in ensuring you have the ability to pivot when you're ready.

Shaun McAdams: You have the ability to reorganize data for performance, but also for education. When you connect directly to your operational systems, that data was persisted to meet the needs of that application. Data is stored the way the application needs it to be stored, not so much the way you would want it for analytics. Some people will get around that by creating these extracts all the time, extracts to some manual BI tool. And it's just repetitive: daily, weekly, whatever their reporting frequency is. They now usually have to do some type of processing, so let's talk about that. Because they're going to have to do it manually every single time to make sure they got everything; that's the validation part you were talking about. But they're probably also going to change some things, because there's some bad data and they're going to have to do some quality work, and they're going to do it manually. So what we would say is, "Hey, ingest the data into a data architecture built for analytics; have a place where you can persist it." That does match a data lake type of architecture, but we help organizations even if they're in a data warehouse. So if they're in a data lake, we talked about the raw storage: maybe that's going to be stored in Parquet or ORC or Avro or something like that, and then the data that's needed gets pushed up to a data warehouse environment. What types of things does a solution need in that processing layer?

Warren Sifre: Well, processing... let's define that term, processing, in this case. Because how many people have we met through our conversations with various organizations where it's, "Oh yeah, I run this report," and they're running some Cognos or Crystal Reports or SSRS report, and we think that's the end of it, or at least most people in the organization think that's the end of it. And then we find out, "Yeah, I run this report and I copy and paste it into Excel. Then I run another report and copy and paste it into Excel. I go to this website and I paste it into Excel. Then in Excel, I've got all these macros, I've got all these formulas that I've built." Those macros and formulas, that's the processing. That right there, that transformation: the cleanup you referred to, validating the data, ensuring the integrity, ensuring it's shaped and looks the way you want it to, the groupings, the bins, any flags you need to establish for whether a particular record should or should not be counted in a particular way. All of those things fall under processing. But then you also have the aspect of: guess what, the data just doesn't match. The data is ragged. The data is dirty. You've got commas and apostrophes everywhere, such that if you try to put it in certain relational database engines, it's going to struggle. It's going to be like, "Hey, you just broke my ability to do this." So that is where you would establish a means of pre-processing some of this stuff. Ideally you would go from ingest to store, and within the store area, that's where you may be plugging in a tool to do this processing. Maybe something like Databricks or Spark, or something as low tech as Python. I say low tech because it's a low cost of entry, not because it isn't powerful. It can even be PowerShell. But something goes in that processing step that does something to that data to make it ready for model consumption, and possibly even serving consumption.
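
As one possible shape for that processing step, here is a small sketch of the cleanup Warren describes: stripping the stray commas and apostrophes, adding flags, and binning values. The column names and business rules are invented for illustration.

```python
# Sketch of a processing step: cleaning ragged data, adding flags and bins.
# Column names and business rules here are invented for illustration.
import pandas as pd

def clean_orders(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()
    # Strip the commas and apostrophes that can break some relational loads.
    df["customer_name"] = (
        df["customer_name"].astype(str).str.replace(r"[,']", "", regex=True).str.strip()
    )
    # Flag records that should not be counted in certain rollups.
    df["is_test_order"] = df["customer_name"].str.lower().str.startswith("test")
    # Bin amounts into reporting groupings.
    df["amount_band"] = pd.cut(
        df["amount"],
        bins=[0, 100, 1000, float("inf")],
        labels=["small", "medium", "large"],
    )
    return df
```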

Shaun McAdams: Yep. So you clean this data up; people are going to want to use the data before we get to that point. We talked about the fact that you're probably going to have to reorganize it in some way, shape, or form. We call that model. What we mean by that isn't that you're developing a predictive model per se, but that you're reorganizing this data to make it meaningful and usable to whatever's going to end up consuming it. So what are some different types of philosophies that exist for modeling and organizing data?

Warren Sifre: Well, at the model layer, you've got everything from your traditional dimensional models and your data mart models, which is your fact and dimension concepts, and most models will end up resembling something like that, but there are other architectures. You can go down the Data Vault path. Or, instead of having a materialized model, maybe you want to use a lakehouse approach, where you build these models on demand from the raw data in storage. And this goes back to: what are the requirements, how dynamic are those requirements, and what do you need to facilitate in this analytical platform you're trying to establish? So the modeling piece is where you transform the data and shape it so that it's conducive and most performant for whatever you're serving it to. And you may have different models. You may have the exact same data, and you may say, "You know what, I need a special data mart to be able to do this, but I want the majority of my stuff to go to the lakehouse," because the majority of the stuff is what's really changing a lot, and for this mart, because we're talking about a relational database, change is usually a little more involved than in a lakehouse. Guess what, you're going to limit that scope.
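
For the dimensional model Warren mentions first, here is a minimal sketch of deriving one dimension and one fact table from a cleaned dataset; the surrogate-key approach and the column names are illustrative, not a prescription.

```python
# Sketch of a star-schema model step: one dimension, one fact.
# Column names and the surrogate-key scheme are illustrative.
import pandas as pd

def build_star_schema(clean: pd.DataFrame):
    # Dimension: one row per customer, with a surrogate key.
    dim_customer = (
        clean[["customer_id", "customer_name"]]
        .drop_duplicates()
        .reset_index(drop=True)
    )
    dim_customer["customer_key"] = dim_customer.index + 1

    # Fact: measures plus a foreign key into the dimension.
    fact_orders = clean.merge(
        dim_customer[["customer_id", "customer_key"]], on="customer_id"
    )[["customer_key", "order_id", "order_date", "amount"]]

    return dim_customer, fact_orders
```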

Shaun McAdams: Yeah. And maybe it's just performance related, where the data that's hit by so many different types of analytics needs that warehouse or mart serving area, because it's constantly handling ad hoc queries and people just want a high level of response. Whereas in your lakehouse environment, it could just be doing data mining, or answering questions that are asked less frequently, where you don't need to re-persist it, and you're okay if it takes a little longer to get the answer, because you don't have that extra layer of processing, that extra layer of storage, and all the other things you would have to do. I agree completely. It doesn't matter if you're doing dimensional modeling, you're running Data Vault, or you're doing what I call fit-for-purpose models, which is just, "Oh, you need this analytic report and it needs these types of data values; I'm just going to create a data object that matches that." You're going to want to reorganize that data in some way, shape, or form, and then you're going to get it out. So let's talk about the serve layer and the types of tools and things that exist there.

Warren Sifre: So the serve layer could be any number of things. Technically you could use email as a serve layer, because you're going to be generating an extract and sending it off to someone. And that generation of an extract could be a paginated report: your traditional SSRS, Crystal Reports, your Cognos reporting. You've got other things: maybe you want a semantic layer in between that you want to persist, maybe something within an Analysis Services model, or maybe a Power BI dataset or a Tableau data set. You can use any of those tools I mentioned on the visualization side, actually transforming this model content into actionable intelligence that someone can then build dashboards on; that's a key thing there. But guess what, the serve layer could also be an interface. It could be something like MuleSoft that applies an API on top of it, that says, "Hey, I'm going to serve the content of this model via API to others who may need it in other ways." So it could be an integration point that the serve layer is leveraging.
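
To illustrate the serve-layer-as-interface idea, here is a sketch of exposing a modeled dataset over an API. Flask stands in purely for illustration (Warren names MuleSoft for this pattern), and the file and endpoint are hypothetical.

```python
# Sketch of serving a modeled dataset via an API; Flask is illustrative only.
import pandas as pd
from flask import Flask, jsonify

app = Flask(__name__)
# Hypothetical output of the model layer, loaded once at startup.
customer_totals = pd.read_csv("customer_totals.csv")

@app.route("/customers/<int:customer_id>/total")
def customer_total(customer_id: int):
    rows = customer_totals[customer_totals["customer_id"] == customer_id]
    if rows.empty:
        return jsonify({"error": "not found"}), 404
    return jsonify({"customer_id": customer_id, "total": float(rows["amount"].sum())})

if __name__ == "__main__":
    app.run(port=8080)
```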

Shaun McAdams: Yeah. And we don't really go deeper than that, because you can do a lot of things within the serve layer. When we talk about building the platform, we have this delineation, like when we talk about how you deliver data and analytic products. If you go back to a previous podcast, we say you do that through platforms, through engineering, and through insights. And when we talk about tech in platforms, these lower-level things, we stop here, because that's what's needed to be successful for the technologies that serve engineering or insights. We definitely respect the fact that the serve layer could have a lot of different BI or data science platforms, and they may think that the things they create are the outputs. But in the sense of the platform, we're just talking about the tools needed to make them successful. So yeah, the serve layer could be your dashboarding tool, the embedded analytics you're talking about, APIs, data science platforms; there are so many different options now. It's actually amazing, because when I think about five to ten years ago, you were accustomed to building a lot of things, and now there are just so many tools we can plug in, and so many people have created analytic products. So now you have models that exist, people sharing data models that exist. I think Microsoft has some you can download for different industries; there are a few you can buy. IBM has some really sophisticated, large ones that we implement, predominantly in the healthcare industry. There are a lot of data products now, a lot of people sharing libraries, ML libraries and data science libraries. I think that's awesome; it's exciting to see. But all of that noise... because that's the part I feel people focus in on more than anything. They focus in on those analytic outputs. They focus in on data science platforms, on BI platforms. And we're forgetting about the role that data architecture plays in making those successful, in giving a good end user experience, in being able to enforce the data governance policies we create, because we have tools to do it; operational systems weren't designed to do that. So immediately, when I talk to an organization and they're connecting directly to their systems or doing those extracts, you know immediately there's a lack of data governance, or it doesn't exist.

Warren Sifre: Nope. And it's amazing how many organizations say, "We have it," and then the gaps we find as we make our way through it... Because we start asking some of these questions: How are you storing? How are you monitoring? How are you processing, and so forth? And it turns out it's a hodgepodge of somewhat shadow IT: I'm going to use whatever tool I want to use to do it. And I think the idea of these guiding principles is that, as an IT organization trying to figure out what the data responsibilities are going to be for IT versus the business units themselves, this is most definitely going to be an onion effect. We as IT professionals are going to say, "All right, what's the platform we need to be able to manage: ingest, store, model, process?" And at the serve layer, we might choose to support one tool but let the business units choose whatever they want beyond that. And then what may happen is, guess what, those business units may choose a serve layer tool that does some additional transformation, that maybe brings in some other data sets they need. So guess what they're doing: they're picking a tool that is going to need ingestion, storage, model, and process at a business unit level. And this goes back to that onion effect. We have the main IT layer, this is what we're going to be responsible for, and then each business unit is going to have a little microcosm of each of these. I think making them aware of these concepts will give them the opportunity to, A, choose a tool that is not going to run out its life in 18 months. And more importantly, IT can pick an architecture that allows those business units to change every 18 months to whatever they feel they need, and not be bound by certain things. This is where, if you take these core concepts, these requirements for a platform, into account as you go through your tool selection process, as you establish your capability requirements, not only what you need now or needed yesterday but what you're going to need in the future, that will drive tool selection, as well as which tools are going to play which roles. You may have one tool playing multiple roles now, because that's what you can afford or that's where you're at. But recognize that you may want to segregate or split things off so that you have greater modularity, so that you're able to evolve with the changing times. As technology improves, you're able to keep up with it, and you're not stuck saying, "Okay, I've got to spend another $1.2 million to get off of this system we've been on for three years, because a new one has come out and we must have it. I just got back from a conference and we must have it." How do we do that and avoid that cost?

Shaun McAdams: Yeah. Man, so many of those things make stuff go through my mind. Because a lot of times, when we're asked to help organizations develop a data architecture for these types of workloads, naively they want to sit down and have us research all their use cases, all the data, how much volume there is, all of these things. And that's very, very important; I think it's important because you need to develop that data governance policy, who the owners of the data are, who the stewards are. You need to identify all of those things. But those are all use cases you're just creating a backlog for, and a lot of those conversations have very little to do with what you actually need architecturally. So some people, when they listen to this, will probably relive the type of process they've gone through to create an architecture, rather than saying, "Well, Shaun and Warren say you need tools that do these five things." But that's the reality. We've done scores of these. And some of the conversations you'll have with people, understanding where data is, or the resources you have and the tools they're used to, may help you select a particular tool that's right. Or if you're an organization that's heavily SQL Server based and you already have an EDW, but now it's constrained and you're looking at going to Azure, and you just need advice on what types to use, those conversations are meaningful. But you don't have to talk to every business unit about every use case in order to make a determination about the actual architecture you need for this workload. You just need to do these five things. Do we have the tools in place to do these five things? And now we can start bringing value into the organization. But that's some of the pushback we'll get. Some of the pushback will be, "Well, don't you need to talk to everyone about what it is they want?" I want your opinion on that.

Warren Sifre: I think we've gone through a few of these, as you mentioned. And it's one of those things where we always end up in the same place, which is: do you want a physical data lake or not? Do you want this or not? And primarily it's not because they can't use it; it's whether they want to afford that step, whether they have a process that can afford the time to get there. But the architecture, these pieces, it's usually the same core pieces.

Shaun McAdams: Yes, it is.

Warren Sifre: You need some kind of ingestion tool, some kind of storage location, something to model, something to process: those four things. Find a tool that gives you the capability of doing that. Keep in mind: one tool that rules them all, rules you. What I mean by that is, you can't change very quickly when one tool is doing everything for you. So you may want to start there because it's cost effective and it gets you past the finish line immediately, because you've got a time-to-market value you're trying to meet. Awesome. Cool. Go ahead and do that. But recognize that for you to go to the next level, you're eventually going to have to decouple. And this is where... again, we have these conversations. We can meet with 100 business units, and they're all going to tell us, "Well, I need to be able to report on these data points. I need to be able to run this Excel. I need to be able to do this, this, this. And this all sources from the same place. I need to be able to pull data from HR, my finance system, and my timesheet system, to be able to pull up exactly what's happening as far as my employees go." Guess what? You're ingesting your data, you're storing your data, you're modeling it, and you've got some kind of process to take data from three different systems, make sure it's cohesive, and make its way in. Regardless of what the business unit requirements are, you're going to need that. Now, "I'm okay having data refreshed once a day" versus "I need data refreshed once a minute" is going to change the tool you pick, but the overall architecture is still the same. So the requirements and the business units will drive tool selection; they're not going to drive architecture.

Shaun McAdams: Yeah. So if you're listening, as we close this up: we have reference architectures for on-premise solutions. If you're using particular data technologies on-prem in a data warehouse design, and it's just not performing and you need to make some change, we do those evaluations. We obviously have architectures that exist in Azure and AWS and GCP; as a matter of fact, you have some of those on your screen [crosstalk] as I look over your shoulder. And so what I would advise people is to just reach out if you have questions about common architectures that are being deployed and used and are proven. But if you're getting ready to start the process of designing a data architecture for these types of analytic workloads, 100% reach out, because you're probably going to go about the development and design of that in ways we've seen other organizations do, and it's not that you won't have meaningful conversations, but it's going to take you a lot longer to get to the point. We can really fine-tune: here's what's really important, here are the questions you really need to ask, these are the actual things that are going to identify where you place this architecture, the type of people that you have. So there are just these key things, and I'd encourage you to reach out. Also, just change your perspective a little bit. If you have a platform currently in place, how is it supporting those five things? The [inaudible] of ingesting data, regardless of [inaudible] or motion; that you're persisting it somewhere. I think that's a big one, because there are a lot of organizations that maybe have a data warehouse type of setup and can't persist all types of data or bring data in at any velocity, so they're a little constrained there. How are they processing the data? Are they creating just one pipeline for every little thing, or are they building things in a way that's reusable or declarative, that makes that type of processing a little simpler? How are they organizing it? Do they have these really large, sophisticated models where it now takes a really long time to make enhancements or changes, and is that the right thing? And then, what tools are being used to serve it out, and is that meeting the needs of the users? So, any closing statements or thoughts on this tech overview, platform overview?

Warren Sifre: I think you hit everything on the head with that. I would challenge everybody that's going through this journey to really ask those five questions: how are we doing this? And understand that you will not be staying where you're at now from a data requirements perspective. It will only grow; you will only be asked for more. So recognizing that you have a platform that allows you to evolve into that is going to be huge. And that will be a cost savings; the C-suite is going to love you for it, because you're not paying for a migration path anymore, you're just swapping in a simple tool, a small piece. So that would be my piece there.

Shaun McAdams: All right. I appreciate the time, Warren. So next time we come together, we're going to talk about data flow and data pipelines, focusing in on data engineering aspects and some data governance stuff. So hopefully you'll stick around for that and subscribe. If you haven't listened to the previous podcasts we did on people and process, and the one with Databricks on data architecture, definitely go check those out, and we'll see you guys next time.

Warren Sifre: Thank you.

Angel Leon: Thank you for listening in to this week's edition of ASCII ANYTHING, presented by Moser Consulting. We hope you enjoyed Shaun and Warren's conversation on data and analytics. Join us next week when we continue to dive deeper with our resident experts into what they're currently working on. And remember, if you have an idea or a topic you'd like us to explore, please reach out to us through our social media channels. In the meantime, please remember to give us a rating and subscribe to our feed wherever you get your podcasts. Until then, so long everybody.

Today's Host

Angel Leon | HR Advisor & Corporate Compliance Officer

Today's Guests

Warren Sifre | Director of Strategy

Shaun McAdams | Vice President of Data & Analytics