How do you deliver on data initiatives in a competitive talent market?

In recent years, the term data science has become more popular due to the influx of data across all businesses. Data science is about extracting valuable insights and answering questions by analyzing data with statistical methods, computing power, and automation. When a business wants to answer a data-driven question, it should follow a set of predefined steps, known as the data science process, and understand what each step involves.

The data science process involves more than one role. These roles include business analysts, data engineers, data scientists, and developers. Even though there can be some overlap, each role plays a vital part in the process. The business analyst provides the business understanding that guides the project. The data engineer prepares the data for the data scientist to use in model training. The data scientist must understand the data in order to train and test the model. The developer is responsible for deploying and operationalizing the model.

[Figure: The data science process]

These days, organizations are finding it hard to retain talent for their data science processes. Fueled by big data and AI, demand for data science skills is growing exponentially, according to job sites, while the supply of skilled applicants is growing at a slower pace. A KPMG CIO Survey of over 3,600 technology leaders at companies across the U.S. found that 46% of chief information officers see “big data and analytics” as the area most affected by the shortage in the nation’s job market.

One way to address this shortage is to partner with vendors who offer data science services. This approach gives in-house data science teams access to resources, including industry knowledge, skills, and experience, to deliver great data products for data-driven decision making. Most vendors offer these services on a project basis. That is a great way to accelerate data work in large organizations, but it is hard to sustain over a long period of time because of cost, especially for small to midsize companies, and it can cause data initiatives to slow down or go undelivered.

The model I have found to be more effective over the long term, especially for small to medium-size businesses, is DSaaS (Data Science as a Service), where the client has access to an entire data science team on a monthly subscription basis. This model keeps costs down and takes away the headaches that go along with retaining a large data science team. Another reason I like this approach is that it aligns with the agile philosophy of delivery, which has a higher rate of success than the traditional waterfall approach. A few firms offer data strategy and engineering services in this format, such as datatelligent.ai, which delivers customized analytics and AI solutions.

What is Data Lake Storage?

A data lake is a large repository that stores both structured and unstructured data, on which you can run analytics and prepare reports. Data Lake Storage combines the scalability and cost benefits of object storage with the reliability and performance of a big data file system. The following illustration shows how Azure Data Lake stores all your business data and makes it available for analysis.

[Figure: Azure Data Lake Storage concept]
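To make this concrete, here is a minimal sketch of reading lake data from Spark, assuming an ADLS Gen2 account; the account, container, and path names are hypothetical placeholders, and cluster authentication (for example, via a service principal) is assumed to be configured already.

```python
# Minimal sketch: reading business data from Azure Data Lake Storage Gen2
# with PySpark. All storage names below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("data-lake-demo").getOrCreate()

# abfss:// is the Azure Blob File System driver URI scheme for ADLS Gen2.
sales_df = spark.read.csv(
    "abfss://business-data@contosodatalake.dfs.core.windows.net/sales/2019/*.csv",
    header=True,
    inferSchema=True,
)

# Structured and unstructured files live side by side in the lake;
# here we run a quick aggregation on the structured portion.
sales_df.groupBy("region").count().show()
```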

Analytics in Healthcare – Sources – Analytics – Applications

[Figure: The healthcare analytics ecosystem (sources, analytics, applications)]

Roadmap to data insights

[Figure: Roadmap to data insights]

Azure SQL Data Warehouse and Azure Databricks can be used together for large-scale data analysis in a cost-effective way.

Suppose you work in the analytics department of a large health system. Your organization’s IT infrastructure is hybrid, spanning both on-premises and cloud-based systems, and all data, including customer interactions and services information, resides in Azure SQL Data Warehouse. Your department analyzes customer service usage patterns and, based on its findings, identifies inefficiencies in the underlying processes. You can achieve the desired results by using the robust machine learning and deep learning capabilities of Azure Databricks in conjunction with Azure SQL Data Warehouse.
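As a rough illustration of that kind of analysis, the sketch below clusters customers by usage metrics with Spark MLlib. The column names and sample values are hypothetical; in practice the DataFrame would be loaded from SQL Data Warehouse via the connector described later in this post.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

# On Databricks a SparkSession named `spark` already exists; this line
# simply makes the sketch runnable elsewhere too.
spark = SparkSession.builder.appName("usage-patterns").getOrCreate()

# Hypothetical usage metrics standing in for warehouse data.
usage_df = spark.createDataFrame(
    [(3, 25.0, 2), (12, 8.5, 6), (1, 40.0, 1),
     (9, 12.0, 5), (2, 33.0, 1), (15, 6.0, 7)],
    ["visits_per_month", "avg_wait_minutes", "services_used"],
)

# Collect the numeric metrics into a single feature vector for MLlib.
assembler = VectorAssembler(
    inputCols=["visits_per_month", "avg_wait_minutes", "services_used"],
    outputCol="features",
)
features_df = assembler.transform(usage_df)

# Group customers with similar service-usage patterns; unusual clusters
# often point at process inefficiencies worth investigating.
kmeans = KMeans(k=3, seed=42, featuresCol="features")
model = kmeans.fit(features_df)
model.transform(features_df).groupBy("prediction").count().show()
```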

Azure Databricks is a fully managed, cloud-based big data and machine learning platform. It enables developers to accelerate AI implementation by simplifying the process of building enterprise-grade production data applications. Built in a joint effort by Microsoft and the team that started Apache Spark, Azure Databricks provides data science and engineering teams with a single platform for big data processing and machine learning.

By combining an end-to-end, managed Apache Spark platform optimized for the cloud with the enterprise scale and security of the Azure platform, Azure Databricks makes it easy to run large-scale Spark workloads.

You can access SQL Data Warehouse from Azure Databricks by using the SQL Data Warehouse connector. The connector is a data source implementation for Apache Spark that uses Azure Blob storage and PolyBase in SQL Data Warehouse to transfer large volumes of data efficiently between an Azure Databricks cluster and a SQL Data Warehouse instance.
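A minimal sketch of a read through the connector might look like the following; the JDBC URL, credentials, storage locations, and table name are all hypothetical placeholders, and `spark` is the SparkSession that Databricks provides on every cluster.

```python
# Sketch: reading a warehouse table into a Databricks DataFrame via the
# SQL Data Warehouse connector. All connection details are placeholders.
df = (
    spark.read
    .format("com.databricks.spark.sqldw")
    # JDBC connection to the SQL Data Warehouse instance.
    .option("url", "jdbc:sqlserver://contoso.database.windows.net:1433;"
                   "database=HealthDW;user=loader;password=<password>")
    # Staging area in Blob storage used by PolyBase to exchange data.
    .option("tempDir", "wasbs://staging@contosostorage.blob.core.windows.net/tempdir")
    # Let the connector reuse the Spark cluster's storage credentials.
    .option("forwardSparkAzureStorageCredentials", "true")
    .option("dbTable", "dbo.CustomerServiceUsage")
    .load()
)
```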

Both the Azure Databricks cluster and the SQL Data Warehouse instance access a common Blob storage container to exchange data. In Azure Databricks, Spark jobs are triggered by the SQL Data Warehouse connector to read data from and write data to the Blob storage container. On the SQL Data Warehouse side, data loading and unloading operations performed by PolyBase are triggered by the SQL Data Warehouse connector through JDBC.
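The write path mirrors the read path: the connector stages the DataFrame in the same Blob container, and PolyBase loads it into the warehouse. Again, the connection details and table name below are placeholders.

```python
# Sketch: writing analysis results back to SQL Data Warehouse through the
# shared Blob storage staging container. Names remain hypothetical.
(
    df.write
    .format("com.databricks.spark.sqldw")
    .option("url", "jdbc:sqlserver://contoso.database.windows.net:1433;"
                   "database=HealthDW;user=loader;password=<password>")
    .option("tempDir", "wasbs://staging@contosostorage.blob.core.windows.net/tempdir")
    .option("forwardSparkAzureStorageCredentials", "true")
    .option("dbTable", "dbo.UsageAnalysisResults")
    .save()
)
```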

PolyBase is a technology that accesses data outside of the database via the T-SQL language. With Azure SQL Data Warehouse, you can use PolyBase to import and export data to and from Azure Blob storage and Azure Data Lake Store.
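As a hedged sketch of what that looks like in practice, the snippet below sends PolyBase T-SQL to the warehouse from Python via pyodbc. Every object name is hypothetical, and the external data source, file format, and database-scoped credential are assumed to have been created beforehand.

```python
# Sketch: triggering a PolyBase import from Blob storage with T-SQL sent
# through pyodbc. Server, database, and object names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=contoso.database.windows.net;DATABASE=HealthDW;"
    "UID=loader;PWD=<password>",
    autocommit=True,  # SQL DW restricts some DDL inside user transactions
)
cursor = conn.cursor()

# CREATE EXTERNAL TABLE maps files in Blob storage onto a T-SQL table.
cursor.execute("""
    CREATE EXTERNAL TABLE dbo.ExternalUsage (
        CustomerId INT,
        ServiceCode VARCHAR(10),
        VisitDate DATE
    )
    WITH (
        LOCATION = '/usage/2019/',
        DATA_SOURCE = BlobStagingSource,   -- assumed to exist
        FILE_FORMAT = CsvFileFormat        -- assumed to exist
    );
""")

# CTAS then materializes the external data as a regular distributed table.
cursor.execute("""
    CREATE TABLE dbo.Usage
    WITH (DISTRIBUTION = ROUND_ROBIN)
    AS SELECT * FROM dbo.ExternalUsage;
""")
```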

Azure Data Factory is a cloud-based data integration service. It lets you create data-driven workflows in the cloud that orchestrate and automate data movement and data transformation. Data Factory supports a wide range of data stores; in this case, it uses Azure SQL Database as a data source.
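For illustration, here is a hedged sketch of creating such a pipeline with the azure-mgmt-datafactory SDK. All resource and dataset names are hypothetical, the two datasets (an Azure SQL Database source and a Blob storage sink) and their linked services are assumed to exist already, and exact model signatures vary somewhat between SDK versions.

```python
# Sketch: a Data Factory pipeline with one copy activity that moves data
# from Azure SQL Database to Blob storage. Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureSqlSource, BlobSink, CopyActivity, DatasetReference, PipelineResource,
)

adf_client = DataFactoryManagementClient(
    DefaultAzureCredential(), "<subscription-id>"
)

# Reference pre-existing datasets by name; their linked services define
# the actual connections to SQL Database and Blob storage.
copy_activity = CopyActivity(
    name="CopyUsageToBlob",
    inputs=[DatasetReference(type="DatasetReference", reference_name="UsageSqlDataset")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="UsageBlobDataset")],
    source=AzureSqlSource(),
    sink=BlobSink(),
)

adf_client.pipelines.create_or_update(
    "analytics-rg", "contoso-adf", "CopyUsagePipeline",
    PipelineResource(activities=[copy_activity]),
)
```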