I'm curious what you've found works best for finding people (employees, contractors, other resources) capable of drawing insights out of the data (or making heads or tails of it)? We're always trying to have a helpful perspective for customers -- as well as wanting to give them a good "push in the back" to get going on that dimension as well.
I fully agree with your previous statements. I worked as a BI consultant for 5 years and have spent 4 years in-house. "Consultant" can be misleading -- we actually built things (not only hot air) ;-) ... We created visualizations (dashboards) and data models mainly with Qlik (the complete road from data extract to visualization/analytics).
I think the most efficient way is to have in-house staff for visualizing data and extracting insights. They should be able to cover 70-90% of the demand. The remaining part, and possible peaks in demand, should be covered by one or more contractors.
In the long run, this ensures you have a reliable contractor who already knows the company, its infrastructure, and the meaning of its data. That helps a lot when, for example, an employee is sick or has left the company -- you can bridge the gap with almost no delay.
Most employees don't want (and often don't have time) to learn additional tools for analytics & data visualization on top of their daily business. And the data models quickly become too complex for "casual users". To make "self-service BI" really work, a lot of upfront effort is needed (to prepare the data etc.). I don't think I've ever seen "self-service BI" working in the real world (maybe for some "power users" in finance & controlling).
Imho the best case is a specialized BI team that works together with domain experts to create insights. People from specific departments usually know their data very well and are very helpful in the process of finding insights -- they have often already built reports/calculations before, but the manual process is just too complicated or too slow.
It’s an apt comparison. The “fully turn-key ‘modern data stack’” is exactly what we’re going for. A key technology difference: managed Redshift vs. managed Snowflake. Because Snowflake separates compute from storage, the differences in pricing and scalability become meaningful as data volumes grow.
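To make that concrete: in Snowflake, compute ("warehouses") can be resized or suspended independently of the data in storage, so you pay for query horsepower only while it runs. A rough sketch (the warehouse name is made up):

    -- Storage is billed separately; this only provisions compute.
    CREATE WAREHOUSE reporting_wh
      WITH WAREHOUSE_SIZE = 'XSMALL'
           AUTO_SUSPEND = 60;  -- pause after 60s of idle time

    -- Scale compute up for a heavy job without touching storage...
    ALTER WAREHOUSE reporting_wh SET WAREHOUSE_SIZE = 'LARGE';

    -- ...and suspend it afterwards so idle compute costs nothing.
    ALTER WAREHOUSE reporting_wh SUSPEND;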
Thanks!! More accurately put, you’re in good hands with Mozart Data. We have a product aimed at enabling anyone to set up the data stack, without any data engineering needed. But ultimately, we want to help you be successful on your data journey, which is more than just a data stack -- it’s defining core tables, building common analysis templates, and creating a great data culture.
I’m in strong agreement with part of this -- for a company to get value out of its data, you want someone skilled at cleaning the data, cutting it properly, and teasing out the insights. Where I disagree: that person can be a data scientist, but doesn’t need to be. I believe there is a growing population of data-savvy employees without that title -- many of them may not even have "data" in their title at all (they’re in business operations, marketing, finance, and sales) -- and many of them write SQL and are very comfortable manipulating data in BI tools, R, Python, Excel, or GSheets.
I also believe that company context matters a lot. So much of getting started with extracting value from data is getting up the learning curve of understanding what the data means (which columns have the truth). One of the reasons we don’t ship a lot of canned reports is that understanding these edge cases within a company often matters a lot, and not accounting for the nuance can easily lead to misinference. With this in mind, the explosion of ETL solutions and products like Mozart Data means that others at the company can specialize in their business context, as opposed to needing someone who can do all aspects of data: engineering, data science, analysis, and communicating/presenting it.
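As an illustration of the kind of edge case I mean (all table and column names here are hypothetical), even a "simple" revenue query usually needs company-specific rules baked in:

    SELECT
        DATE_TRUNC('month', o.created_at) AS month,
        SUM(o.amount)                     AS revenue
    FROM orders o
    JOIN customers c
      ON c.id = o.customer_id
    WHERE o.status <> 'refunded'       -- company rule: refunds don't count
      AND c.is_test_account = FALSE    -- internal/test accounts excluded
    GROUP BY 1
    ORDER BY 1;

Someone without the company context will happily sum every row and report a number that is subtly wrong.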
Thanks! As mentioned in other comments, we partner with and use PBF (Powered by Fivetran) for connectors we believe are best in class. We are committed to ETL reliability; ease of use/setup and automatically managing changes are critical for success. In addition to PBF, we leverage Singer Taps, and our team is adding to the long tail of connectors.
We have created our own transformation layer, which includes scheduling, run & version history, and lineage; we do not use dbt under the hood. We share the philosophy of writing transforms in SQL one layer above the BI tool -- this leads to greater consistency of downstream answers and allows business users and analysts to write the business logic into the core tables.
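A sketch of what such a SQL transform might look like (table and column names are made up): the business logic is defined once in a core table, and every dashboard downstream inherits it:

    CREATE OR REPLACE VIEW core_customers AS
    SELECT
        c.id,
        c.signup_date,
        COUNT(o.id)   AS lifetime_orders,
        SUM(o.amount) AS lifetime_revenue,
        -- "active" is defined once, here, not in each dashboard
        MAX(o.created_at) >= CURRENT_DATE - 90 AS is_active
    FROM customers c
    LEFT JOIN orders o
      ON o.customer_id = c.id
    GROUP BY c.id, c.signup_date;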
Functionality-wise, that stack would be very similar! A core design principle of ours is that you should have the power of a modern data platform even if all you know is a bit of SQL. So our product is functionally similar to a stack like Stitch + Snowflake + dbt (and we use some of those under the hood), but we try to wrap it all in an easier-to-use interface (e.g., typically to snapshot a table you write a few lines of config code, whereas in Mozart you just flip a toggle), and to be more cost-competitive for smaller orgs.
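For reference, the config-code route the toggle replaces looks roughly like this dbt snapshot (a sketch; the snapshot, source, and key names are made up):

    {% snapshot orders_snapshot %}
    {{ config(
        target_schema='snapshots',
        unique_key='id',
        strategy='timestamp',
        updated_at='updated_at'
    ) }}
    -- capture row history so changes to orders are queryable over time
    select * from {{ source('app', 'orders') }}
    {% endsnapshot %}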
We partner with and use PBF (Powered by Fivetran) for some connectors, which we believe are best in class. In addition, we use Singer Taps and have also custom-built some connectors. There are no additional fees for extract-transform-load, whether via Fivetran or any other ETL service (we cover those). The primary additional cost is a BI tool, though there are a number of free options you can connect.